00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 112 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3290 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.087 using credential 00000000-0000-0000-0000-000000000002 00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.132 Fetching changes from the remote Git repository 00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.168 Using shallow fetch with depth 1 00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.168 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.486 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.495 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.507 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:06.507 > git config core.sparsecheckout # timeout=10 00:00:06.519 > git read-tree -mu HEAD # timeout=10 00:00:06.534 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:06.559 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:06.559 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:06.680 [Pipeline] Start of Pipeline 00:00:06.693 [Pipeline] library 00:00:06.695 Loading library shm_lib@master 00:00:06.695 Library shm_lib@master is cached. Copying from home. 00:00:06.709 [Pipeline] node 00:00:06.715 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.718 [Pipeline] { 00:00:06.726 [Pipeline] catchError 00:00:06.727 [Pipeline] { 00:00:06.739 [Pipeline] wrap 00:00:06.749 [Pipeline] { 00:00:06.758 [Pipeline] stage 00:00:06.761 [Pipeline] { (Prologue) 00:00:06.783 [Pipeline] echo 00:00:06.785 Node: VM-host-WFP7 00:00:06.792 [Pipeline] cleanWs 00:00:06.811 [WS-CLEANUP] Deleting project workspace... 00:00:06.811 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.817 [WS-CLEANUP] done 00:00:07.000 [Pipeline] setCustomBuildProperty 00:00:07.078 [Pipeline] httpRequest 00:00:07.117 [Pipeline] echo 00:00:07.119 Sorcerer 10.211.164.101 is alive 00:00:07.127 [Pipeline] httpRequest 00:00:07.131 HttpMethod: GET 00:00:07.132 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:07.132 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:07.149 Response Code: HTTP/1.1 200 OK 00:00:07.149 Success: Status code 200 is in the accepted range: 200,404 00:00:07.150 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:11.744 [Pipeline] sh 00:00:12.029 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:12.050 [Pipeline] httpRequest 00:00:12.076 [Pipeline] echo 00:00:12.078 Sorcerer 10.211.164.101 is alive 00:00:12.092 [Pipeline] httpRequest 00:00:12.096 HttpMethod: GET 00:00:12.097 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:12.098 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:00:12.123 Response Code: HTTP/1.1 200 OK 00:00:12.124 Success: Status code 200 is in the accepted range: 200,404 00:00:12.124 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:32.669 [Pipeline] sh 00:01:32.953 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz 00:01:35.505 [Pipeline] sh 00:01:35.789 + git -C spdk log --oneline -n5 00:01:35.789 241d0f3c9 test: fix dpdk builds on ubuntu24 00:01:35.789 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs 00:01:35.789 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:35.789 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:35.789 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:35.809 [Pipeline] withCredentials 00:01:35.819 > git --version # timeout=10 00:01:35.833 > git --version # 'git version 2.39.2' 00:01:35.849 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:35.852 [Pipeline] { 00:01:35.862 [Pipeline] retry 00:01:35.864 [Pipeline] { 00:01:35.881 [Pipeline] sh 00:01:36.164 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:36.176 [Pipeline] } 00:01:36.200 [Pipeline] // retry 00:01:36.206 [Pipeline] } 00:01:36.228 [Pipeline] // withCredentials 00:01:36.240 [Pipeline] httpRequest 00:01:36.258 [Pipeline] echo 00:01:36.260 Sorcerer 10.211.164.101 is alive 00:01:36.270 [Pipeline] httpRequest 00:01:36.276 HttpMethod: GET 00:01:36.276 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.277 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.278 Response Code: HTTP/1.1 200 OK 00:01:36.278 Success: Status code 200 is in the accepted range: 200,404 00:01:36.279 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:37.569 [Pipeline] sh 00:01:37.852 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:39.244 [Pipeline] sh 00:01:39.607 + git -C dpdk log --oneline -n5 00:01:39.607 caf0f5d395 version: 22.11.4 00:01:39.607 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:39.607 dc9c799c7d vhost: fix missing spinlock unlock 00:01:39.607 4307659a90 net/mlx5: 
fix LACP redirection in Rx domain 00:01:39.607 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:39.624 [Pipeline] writeFile 00:01:39.645 [Pipeline] sh 00:01:39.933 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:39.946 [Pipeline] sh 00:01:40.231 + cat autorun-spdk.conf 00:01:40.231 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.231 SPDK_TEST_NVME=1 00:01:40.231 SPDK_TEST_FTL=1 00:01:40.231 SPDK_TEST_ISAL=1 00:01:40.231 SPDK_RUN_ASAN=1 00:01:40.231 SPDK_RUN_UBSAN=1 00:01:40.231 SPDK_TEST_XNVME=1 00:01:40.231 SPDK_TEST_NVME_FDP=1 00:01:40.231 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.231 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.231 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.238 RUN_NIGHTLY=1 00:01:40.240 [Pipeline] } 00:01:40.254 [Pipeline] // stage 00:01:40.269 [Pipeline] stage 00:01:40.271 [Pipeline] { (Run VM) 00:01:40.287 [Pipeline] sh 00:01:40.571 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:40.572 + echo 'Start stage prepare_nvme.sh' 00:01:40.572 Start stage prepare_nvme.sh 00:01:40.572 + [[ -n 3 ]] 00:01:40.572 + disk_prefix=ex3 00:01:40.572 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:40.572 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:40.572 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:40.572 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.572 ++ SPDK_TEST_NVME=1 00:01:40.572 ++ SPDK_TEST_FTL=1 00:01:40.572 ++ SPDK_TEST_ISAL=1 00:01:40.572 ++ SPDK_RUN_ASAN=1 00:01:40.572 ++ SPDK_RUN_UBSAN=1 00:01:40.572 ++ SPDK_TEST_XNVME=1 00:01:40.572 ++ SPDK_TEST_NVME_FDP=1 00:01:40.572 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.572 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:40.572 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.572 ++ RUN_NIGHTLY=1 00:01:40.572 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:40.572 + nvme_files=() 00:01:40.572 + declare -A nvme_files 00:01:40.572 + backend_dir=/var/lib/libvirt/images/backends 00:01:40.572 + nvme_files['nvme.img']=5G 00:01:40.572 + nvme_files['nvme-cmb.img']=5G 00:01:40.572 + nvme_files['nvme-multi0.img']=4G 00:01:40.572 + nvme_files['nvme-multi1.img']=4G 00:01:40.572 + nvme_files['nvme-multi2.img']=4G 00:01:40.572 + nvme_files['nvme-openstack.img']=8G 00:01:40.572 + nvme_files['nvme-zns.img']=5G 00:01:40.572 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:40.572 + (( SPDK_TEST_FTL == 1 )) 00:01:40.572 + nvme_files["nvme-ftl.img"]=6G 00:01:40.572 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:40.572 + nvme_files["nvme-fdp.img"]=1G 00:01:40.572 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:40.572 + for nvme in "${!nvme_files[@]}" 00:01:40.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:40.572 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.572 + for nvme in "${!nvme_files[@]}" 00:01:40.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:01:40.572 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:40.572 + for nvme in "${!nvme_files[@]}" 00:01:40.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:40.572 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.572 + for nvme in "${!nvme_files[@]}" 00:01:40.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:40.572 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:40.572 + for nvme in "${!nvme_files[@]}" 00:01:40.572 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:40.831 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:40.831 + for nvme in "${!nvme_files[@]}" 00:01:40.831 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:40.831 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.831 + for nvme in "${!nvme_files[@]}" 00:01:40.831 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:40.831 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:40.831 + for nvme in "${!nvme_files[@]}" 00:01:40.831 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:01:40.831 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:40.831 + for nvme in "${!nvme_files[@]}" 00:01:40.831 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:41.089 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.089 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:41.089 + echo 'End stage prepare_nvme.sh' 00:01:41.089 End stage prepare_nvme.sh 00:01:41.101 [Pipeline] sh 00:01:41.432 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:41.432 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:41.432 00:01:41.432 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:41.432 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:41.432 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:41.432 HELP=0 00:01:41.432 DRY_RUN=0 00:01:41.432 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:01:41.432 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:41.432 NVME_AUTO_CREATE=0 00:01:41.432 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:01:41.432 NVME_CMB=,,,, 00:01:41.432 NVME_PMR=,,,, 00:01:41.432 NVME_ZNS=,,,, 00:01:41.432 NVME_MS=true,,,, 00:01:41.432 NVME_FDP=,,,on, 00:01:41.432 SPDK_VAGRANT_DISTRO=fedora38 00:01:41.432 SPDK_VAGRANT_VMCPU=10 00:01:41.432 SPDK_VAGRANT_VMRAM=12288 00:01:41.432 SPDK_VAGRANT_PROVIDER=libvirt 00:01:41.432 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:41.432 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:41.432 SPDK_OPENSTACK_NETWORK=0 00:01:41.432 VAGRANT_PACKAGE_BOX=0 00:01:41.432 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:41.432 FORCE_DISTRO=true 00:01:41.432 VAGRANT_BOX_VERSION= 00:01:41.432 EXTRA_VAGRANTFILES= 00:01:41.432 NIC_MODEL=virtio 00:01:41.432 00:01:41.432 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:41.432 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:43.967 Bringing machine 'default' up with 'libvirt' provider... 00:01:44.535 ==> default: Creating image (snapshot of base box volume). 00:01:44.793 ==> default: Creating domain with the following settings... 
00:01:44.793 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721758784_edaf630c8eef3d5e9c95 00:01:44.793 ==> default: -- Domain type: kvm 00:01:44.794 ==> default: -- Cpus: 10 00:01:44.794 ==> default: -- Feature: acpi 00:01:44.794 ==> default: -- Feature: apic 00:01:44.794 ==> default: -- Feature: pae 00:01:44.794 ==> default: -- Memory: 12288M 00:01:44.794 ==> default: -- Memory Backing: hugepages: 00:01:44.794 ==> default: -- Management MAC: 00:01:44.794 ==> default: -- Loader: 00:01:44.794 ==> default: -- Nvram: 00:01:44.794 ==> default: -- Base box: spdk/fedora38 00:01:44.794 ==> default: -- Storage pool: default 00:01:44.794 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721758784_edaf630c8eef3d5e9c95.img (20G) 00:01:44.794 ==> default: -- Volume Cache: default 00:01:44.794 ==> default: -- Kernel: 00:01:44.794 ==> default: -- Initrd: 00:01:44.794 ==> default: -- Graphics Type: vnc 00:01:44.794 ==> default: -- Graphics Port: -1 00:01:44.794 ==> default: -- Graphics IP: 127.0.0.1 00:01:44.794 ==> default: -- Graphics Password: Not defined 00:01:44.794 ==> default: -- Video Type: cirrus 00:01:44.794 ==> default: -- Video VRAM: 9216 00:01:44.794 ==> default: -- Sound Type: 00:01:44.794 ==> default: -- Keymap: en-us 00:01:44.794 ==> default: -- TPM Path: 00:01:44.794 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:44.794 ==> default: -- Command line args: 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:44.794 ==> default: -> value=-drive, 00:01:44.794 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:44.794 ==> default: -> value=-device, 00:01:44.794 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:44.794 ==> default: Creating shared folders metadata... 00:01:44.794 ==> default: Starting domain. 00:01:46.186 ==> default: Waiting for domain to get an IP address... 00:02:04.363 ==> default: Waiting for SSH to become available... 00:02:04.363 ==> default: Configuring and enabling network interfaces... 00:02:08.571 default: SSH address: 192.168.121.248:22 00:02:08.571 default: SSH username: vagrant 00:02:08.571 default: SSH auth method: private key 00:02:09.951 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:18.112 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.402 ==> default: Mounting SSHFS shared folder... 00:02:25.306 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.306 ==> default: Checking Mount.. 00:02:26.679 ==> default: Folder Successfully Mounted! 00:02:26.679 ==> default: Running provisioner: file... 00:02:27.652 default: ~/.gitconfig => .gitconfig 00:02:27.909 00:02:27.909 SUCCESS! 00:02:27.909 00:02:27.909 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:27.909 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:27.909 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:27.909 00:02:27.918 [Pipeline] } 00:02:27.937 [Pipeline] // stage 00:02:27.947 [Pipeline] dir 00:02:27.948 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:27.950 [Pipeline] { 00:02:27.962 [Pipeline] catchError 00:02:27.964 [Pipeline] { 00:02:27.976 [Pipeline] sh 00:02:28.254 + vagrant ssh-config --host vagrant 00:02:28.254 + sed -ne /^Host/,$p 00:02:28.254 + tee ssh_conf 00:02:31.533 Host vagrant 00:02:31.533 HostName 192.168.121.248 00:02:31.533 User vagrant 00:02:31.533 Port 22 00:02:31.534 UserKnownHostsFile /dev/null 00:02:31.534 StrictHostKeyChecking no 00:02:31.534 PasswordAuthentication no 00:02:31.534 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:31.534 IdentitiesOnly yes 00:02:31.534 LogLevel FATAL 00:02:31.534 ForwardAgent yes 00:02:31.534 ForwardX11 yes 00:02:31.534 00:02:31.553 [Pipeline] withEnv 00:02:31.556 [Pipeline] { 00:02:31.575 [Pipeline] sh 00:02:31.853 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:31.853 source /etc/os-release 00:02:31.853 [[ -e /image.version ]] && img=$(< /image.version) 00:02:31.853 # Minimal, systemd-like check. 
00:02:31.853 if [[ -e /.dockerenv ]]; then 00:02:31.853 # Clear garbage from the node's name: 00:02:31.853 # agt-er_autotest_547-896 -> autotest_547-896 00:02:31.853 # $HOSTNAME is the actual container id 00:02:31.853 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:31.853 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:31.853 # We can assume this is a mount from a host where container is running, 00:02:31.853 # so fetch its hostname to easily identify the target swarm worker. 00:02:31.853 container="$(< /etc/hostname) ($agent)" 00:02:31.853 else 00:02:31.853 # Fallback 00:02:31.853 container=$agent 00:02:31.853 fi 00:02:31.853 fi 00:02:31.853 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.853 00:02:32.121 [Pipeline] } 00:02:32.144 [Pipeline] // withEnv 00:02:32.155 [Pipeline] setCustomBuildProperty 00:02:32.171 [Pipeline] stage 00:02:32.173 [Pipeline] { (Tests) 00:02:32.192 [Pipeline] sh 00:02:32.471 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:32.742 [Pipeline] sh 00:02:33.021 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:33.297 [Pipeline] timeout 00:02:33.297 Timeout set to expire in 40 min 00:02:33.300 [Pipeline] { 00:02:33.317 [Pipeline] sh 00:02:33.647 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:34.212 HEAD is now at 241d0f3c9 test: fix dpdk builds on ubuntu24 00:02:34.224 [Pipeline] sh 00:02:34.503 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:34.773 [Pipeline] sh 00:02:35.050 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:35.325 [Pipeline] sh 00:02:35.606 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:35.865 ++ readlink -f spdk_repo 00:02:35.865 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:35.865 + [[ -n /home/vagrant/spdk_repo ]] 00:02:35.865 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:35.865 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:35.865 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:35.865 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:35.865 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:35.865 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:35.865 + cd /home/vagrant/spdk_repo 00:02:35.865 + source /etc/os-release 00:02:35.865 ++ NAME='Fedora Linux' 00:02:35.865 ++ VERSION='38 (Cloud Edition)' 00:02:35.865 ++ ID=fedora 00:02:35.865 ++ VERSION_ID=38 00:02:35.865 ++ VERSION_CODENAME= 00:02:35.865 ++ PLATFORM_ID=platform:f38 00:02:35.865 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:35.865 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:35.865 ++ LOGO=fedora-logo-icon 00:02:35.865 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:35.865 ++ HOME_URL=https://fedoraproject.org/ 00:02:35.865 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:35.865 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:35.865 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:35.865 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:35.865 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:35.865 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:35.865 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:35.865 ++ SUPPORT_END=2024-05-14 00:02:35.865 ++ VARIANT='Cloud Edition' 00:02:35.865 ++ VARIANT_ID=cloud 00:02:35.865 + uname -a 00:02:35.865 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:35.865 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:36.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:36.690 Hugepages 00:02:36.690 node hugesize free / total 00:02:36.690 node0 1048576kB 0 / 0 00:02:36.690 node0 2048kB 0 / 0 00:02:36.690 00:02:36.690 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.948 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:36.948 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:36.948 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:36.948 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:36.948 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:36.948 + rm -f /tmp/spdk-ld-path 00:02:36.948 + source autorun-spdk.conf 00:02:36.948 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:36.948 ++ SPDK_TEST_NVME=1 00:02:36.948 ++ SPDK_TEST_FTL=1 00:02:36.948 ++ SPDK_TEST_ISAL=1 00:02:36.948 ++ SPDK_RUN_ASAN=1 00:02:36.948 ++ SPDK_RUN_UBSAN=1 00:02:36.948 ++ SPDK_TEST_XNVME=1 00:02:36.948 ++ SPDK_TEST_NVME_FDP=1 00:02:36.948 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:36.948 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:36.948 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:36.948 ++ RUN_NIGHTLY=1 00:02:36.948 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:36.948 + [[ -n '' ]] 00:02:36.948 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:36.948 + for M in /var/spdk/build-*-manifest.txt 00:02:36.948 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:36.948 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.948 + for M in /var/spdk/build-*-manifest.txt 00:02:36.948 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:36.948 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.948 ++ uname 00:02:36.948 + [[ Linux == \L\i\n\u\x ]] 00:02:36.948 + sudo dmesg -T 00:02:36.948 + sudo dmesg --clear 00:02:37.214 + dmesg_pid=6110 00:02:37.214 + sudo dmesg -Tw 00:02:37.214 + [[ Fedora Linux == FreeBSD ]] 
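The device table printed by setup.sh status above reflects the guest's PCI and NVMe sysfs state: one virtio disk at 0000:00:03.0 and four QEMU NVMe controllers (vendor 1b36, device 0010) at 0000:00:10.0 through 0000:00:13.0. As a rough, illustrative way to reproduce that inventory by hand on a similar guest (a sketch only, not the script's own implementation), one could walk /sys/class/nvme:

    # Illustrative sketch: list each NVMe controller with its PCI address,
    # vendor:device IDs and the namespaces (block devices) it exposes.
    for ctrl in /sys/class/nvme/nvme*; do
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        ven=$(cat "$ctrl/device/vendor")                   # 0x1b36 for QEMU NVMe
        dev=$(cat "$ctrl/device/device")                   # 0x0010 for QEMU NVMe
        ns=$(ls -d "$ctrl"/nvme*n* 2>/dev/null | xargs -rn1 basename | tr '\n' ' ')
        echo "$bdf $ven:$dev $(basename "$ctrl") -> ${ns:-no namespaces}"
    done
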
00:02:37.214 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.214 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.214 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:37.214 + [[ -x /usr/src/fio-static/fio ]] 00:02:37.214 + export FIO_BIN=/usr/src/fio-static/fio 00:02:37.214 + FIO_BIN=/usr/src/fio-static/fio 00:02:37.214 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:37.214 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:37.214 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:37.214 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.214 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.214 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:37.214 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.214 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.214 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:37.214 Test configuration: 00:02:37.214 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:37.214 SPDK_TEST_NVME=1 00:02:37.214 SPDK_TEST_FTL=1 00:02:37.214 SPDK_TEST_ISAL=1 00:02:37.214 SPDK_RUN_ASAN=1 00:02:37.214 SPDK_RUN_UBSAN=1 00:02:37.214 SPDK_TEST_XNVME=1 00:02:37.214 SPDK_TEST_NVME_FDP=1 00:02:37.214 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:37.214 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:37.214 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:37.214 RUN_NIGHTLY=1 18:20:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:37.214 18:20:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:37.214 18:20:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.214 18:20:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.214 18:20:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.214 18:20:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.214 18:20:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.214 18:20:37 -- paths/export.sh@5 -- $ export PATH 00:02:37.214 18:20:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.214 
18:20:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:37.214 18:20:37 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:37.214 18:20:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721758837.XXXXXX 00:02:37.215 18:20:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721758837.brFJZ3 00:02:37.215 18:20:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:37.215 18:20:37 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:37.215 18:20:37 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:37.215 18:20:37 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:37.215 18:20:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:37.215 18:20:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:37.215 18:20:37 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:37.215 18:20:37 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:37.215 18:20:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.215 18:20:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:37.215 18:20:37 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:02:37.215 18:20:37 -- pm/common@17 -- $ local monitor 00:02:37.215 18:20:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.215 18:20:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.215 18:20:37 -- pm/common@25 -- $ sleep 1 00:02:37.215 18:20:37 -- pm/common@21 -- $ date +%s 00:02:37.215 18:20:37 -- pm/common@21 -- $ date +%s 00:02:37.215 18:20:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721758837 00:02:37.215 18:20:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721758837 00:02:37.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721758837_collect-vmstat.pm.log 00:02:37.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721758837_collect-cpu-load.pm.log 00:02:38.164 18:20:38 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:02:38.164 18:20:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:38.164 18:20:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:38.164 18:20:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.164 18:20:38 -- spdk/autobuild.sh@16 -- $ date -u 00:02:38.164 Tue Jul 23 06:20:38 PM UTC 2024 00:02:38.164 18:20:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:38.164 v24.05-15-g241d0f3c9 00:02:38.164 18:20:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:38.164 18:20:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:38.164 18:20:38 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:38.164 18:20:38 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.164 18:20:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.164 ************************************ 00:02:38.164 START TEST asan 00:02:38.164 ************************************ 00:02:38.164 using asan 00:02:38.164 18:20:38 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:38.164 00:02:38.164 real 0m0.000s 00:02:38.164 user 0m0.000s 00:02:38.164 sys 0m0.000s 00:02:38.164 18:20:38 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.164 18:20:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.164 ************************************ 00:02:38.164 END TEST asan 00:02:38.164 ************************************ 00:02:38.423 18:20:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:38.423 18:20:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:38.423 18:20:38 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:38.423 18:20:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.423 18:20:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.423 ************************************ 00:02:38.423 START TEST ubsan 00:02:38.423 ************************************ 00:02:38.423 using ubsan 00:02:38.423 18:20:38 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:38.423 00:02:38.423 real 0m0.000s 00:02:38.423 user 0m0.000s 00:02:38.423 sys 0m0.000s 00:02:38.423 18:20:38 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.423 18:20:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.423 ************************************ 00:02:38.423 END TEST ubsan 00:02:38.423 ************************************ 00:02:38.423 18:20:38 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:38.423 18:20:38 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:38.423 18:20:38 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:38.423 18:20:38 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:38.423 18:20:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.423 18:20:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.423 ************************************ 00:02:38.423 START TEST build_native_dpdk 00:02:38.423 ************************************ 00:02:38.423 18:20:38 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:38.423 18:20:38 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ 
gcc -dumpversion 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:38.424 caf0f5d395 version: 22.11.4 00:02:38.424 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:38.424 dc9c799c7d vhost: fix missing spinlock unlock 00:02:38.424 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:38.424 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 
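The xtrace output around this point steps through the cmp_versions helper in scripts/common.sh: both version strings are split on the characters ".-:" and compared field by field. A compact standalone sketch of the same idea (illustrative only, not the script's exact code; the fields compared in this run are purely numeric) would be:

    # Return 0 if version $1 is strictly older than version $2, 1 otherwise.
    version_lt() {
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 22.11.4 21.11.0 || echo "not older"   # mirrors the lt 22.11.4 21.11.0 call traced here
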
00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:38.424 patching file config/rte_config.h 00:02:38.424 Hunk #1 succeeded at 60 (offset 1 line). 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:38.424 18:20:38 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:38.424 patching file lib/pcapng/rte_pcapng.c 00:02:38.424 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:38.424 18:20:38 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.689 The Meson build system 00:02:43.689 Version: 1.3.1 00:02:43.689 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:43.689 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:43.689 Build type: native build 00:02:43.689 Program cat found: YES (/usr/bin/cat) 00:02:43.689 Project name: DPDK 00:02:43.689 Project version: 22.11.4 00:02:43.689 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:43.689 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:43.689 Host machine cpu family: x86_64 00:02:43.689 Host machine cpu: x86_64 00:02:43.689 Message: ## Building in Developer Mode ## 00:02:43.689 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:43.689 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:43.689 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:43.689 Program objdump found: YES (/usr/bin/objdump) 00:02:43.689 Program python3 found: YES (/usr/bin/python3) 00:02:43.689 Program cat found: YES (/usr/bin/cat) 00:02:43.689 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
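For readability, the configure step traced just above amounts to the following (same meson command, options and paths as in the trace, wrapped across lines; -Dmachine=native is what triggers the "machine option is deprecated" warning meson prints above):

    cd /home/vagrant/spdk_repo/dpdk
    meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    # Typical follow-up for a meson tree (assumption; the compile/install is not shown in this excerpt):
    # ninja -C build-tmp && ninja -C build-tmp install
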
00:02:43.689 Checking for size of "void *" : 8 00:02:43.689 Checking for size of "void *" : 8 (cached) 00:02:43.689 Library m found: YES 00:02:43.689 Library numa found: YES 00:02:43.689 Has header "numaif.h" : YES 00:02:43.689 Library fdt found: NO 00:02:43.689 Library execinfo found: NO 00:02:43.689 Has header "execinfo.h" : YES 00:02:43.689 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:43.689 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:43.689 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:43.689 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:43.689 Run-time dependency openssl found: YES 3.0.9 00:02:43.689 Run-time dependency libpcap found: YES 1.10.4 00:02:43.689 Has header "pcap.h" with dependency libpcap: YES 00:02:43.689 Compiler for C supports arguments -Wcast-qual: YES 00:02:43.689 Compiler for C supports arguments -Wdeprecated: YES 00:02:43.689 Compiler for C supports arguments -Wformat: YES 00:02:43.689 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:43.689 Compiler for C supports arguments -Wformat-security: NO 00:02:43.689 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:43.689 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:43.689 Compiler for C supports arguments -Wnested-externs: YES 00:02:43.689 Compiler for C supports arguments -Wold-style-definition: YES 00:02:43.689 Compiler for C supports arguments -Wpointer-arith: YES 00:02:43.689 Compiler for C supports arguments -Wsign-compare: YES 00:02:43.689 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:43.689 Compiler for C supports arguments -Wundef: YES 00:02:43.689 Compiler for C supports arguments -Wwrite-strings: YES 00:02:43.689 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:43.689 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:43.689 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:43.689 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:43.689 Compiler for C supports arguments -mavx512f: YES 00:02:43.689 Checking if "AVX512 checking" compiles: YES 00:02:43.689 Fetching value of define "__SSE4_2__" : 1 00:02:43.689 Fetching value of define "__AES__" : 1 00:02:43.689 Fetching value of define "__AVX__" : 1 00:02:43.689 Fetching value of define "__AVX2__" : 1 00:02:43.689 Fetching value of define "__AVX512BW__" : 1 00:02:43.689 Fetching value of define "__AVX512CD__" : 1 00:02:43.689 Fetching value of define "__AVX512DQ__" : 1 00:02:43.689 Fetching value of define "__AVX512F__" : 1 00:02:43.689 Fetching value of define "__AVX512VL__" : 1 00:02:43.690 Fetching value of define "__PCLMUL__" : 1 00:02:43.690 Fetching value of define "__RDRND__" : 1 00:02:43.690 Fetching value of define "__RDSEED__" : 1 00:02:43.690 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:43.690 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:43.690 Message: lib/kvargs: Defining dependency "kvargs" 00:02:43.690 Message: lib/telemetry: Defining dependency "telemetry" 00:02:43.690 Checking for function "getentropy" : YES 00:02:43.690 Message: lib/eal: Defining dependency "eal" 00:02:43.690 Message: lib/ring: Defining dependency "ring" 00:02:43.690 Message: lib/rcu: Defining dependency "rcu" 00:02:43.690 Message: lib/mempool: Defining dependency "mempool" 00:02:43.690 Message: lib/mbuf: Defining dependency "mbuf" 00:02:43.690 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:43.690 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.690 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:43.690 Compiler for C supports arguments -mpclmul: YES 00:02:43.690 Compiler for C supports arguments -maes: YES 00:02:43.690 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.690 Compiler for C supports arguments -mavx512bw: YES 00:02:43.690 Compiler for C supports arguments -mavx512dq: YES 00:02:43.690 Compiler for C supports arguments -mavx512vl: YES 00:02:43.690 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:43.690 Compiler for C supports arguments -mavx2: YES 00:02:43.690 Compiler for C supports arguments -mavx: YES 00:02:43.690 Message: lib/net: Defining dependency "net" 00:02:43.690 Message: lib/meter: Defining dependency "meter" 00:02:43.690 Message: lib/ethdev: Defining dependency "ethdev" 00:02:43.690 Message: lib/pci: Defining dependency "pci" 00:02:43.690 Message: lib/cmdline: Defining dependency "cmdline" 00:02:43.690 Message: lib/metrics: Defining dependency "metrics" 00:02:43.690 Message: lib/hash: Defining dependency "hash" 00:02:43.690 Message: lib/timer: Defining dependency "timer" 00:02:43.690 Fetching value of define "__AVX2__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.690 Message: lib/acl: Defining dependency "acl" 00:02:43.690 Message: lib/bbdev: Defining dependency "bbdev" 00:02:43.690 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:43.690 Run-time dependency libelf found: YES 0.190 00:02:43.690 Message: lib/bpf: Defining dependency "bpf" 00:02:43.690 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:43.690 Message: lib/compressdev: Defining dependency "compressdev" 00:02:43.690 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:43.690 Message: lib/distributor: Defining dependency "distributor" 00:02:43.690 Message: lib/efd: Defining dependency "efd" 00:02:43.690 Message: lib/eventdev: Defining dependency "eventdev" 00:02:43.690 Message: lib/gpudev: Defining dependency "gpudev" 00:02:43.690 Message: lib/gro: Defining dependency "gro" 00:02:43.690 Message: lib/gso: Defining dependency "gso" 00:02:43.690 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:43.690 Message: lib/jobstats: Defining dependency "jobstats" 00:02:43.690 Message: lib/latencystats: Defining dependency "latencystats" 00:02:43.690 Message: lib/lpm: Defining dependency "lpm" 00:02:43.690 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:43.690 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:43.690 Message: lib/member: Defining dependency "member" 00:02:43.690 Message: lib/pcapng: Defining dependency "pcapng" 00:02:43.690 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.690 Message: lib/power: Defining dependency "power" 00:02:43.690 Message: lib/rawdev: Defining dependency "rawdev" 00:02:43.690 Message: lib/regexdev: Defining dependency "regexdev" 00:02:43.690 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:43.690 Message: lib/rib: Defining dependency "rib" 00:02:43.690 Message: lib/reorder: Defining dependency "reorder" 00:02:43.690 Message: lib/sched: Defining dependency "sched" 00:02:43.690 Message: lib/security: Defining dependency "security" 00:02:43.690 Message: lib/stack: Defining dependency "stack" 00:02:43.690 Has header "linux/userfaultfd.h" : YES 00:02:43.690 Message: lib/vhost: Defining dependency "vhost" 00:02:43.690 Message: lib/ipsec: Defining dependency "ipsec" 00:02:43.690 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:43.690 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:43.690 Message: lib/fib: Defining dependency "fib" 00:02:43.690 Message: lib/port: Defining dependency "port" 00:02:43.690 Message: lib/pdump: Defining dependency "pdump" 00:02:43.690 Message: lib/table: Defining dependency "table" 00:02:43.690 Message: lib/pipeline: Defining dependency "pipeline" 00:02:43.690 Message: lib/graph: Defining dependency "graph" 00:02:43.690 Message: lib/node: Defining dependency "node" 00:02:43.690 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.690 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.690 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.690 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.690 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:43.690 Compiler for C supports arguments -Wno-unused-value: YES 00:02:43.690 Compiler for C supports arguments -Wno-format: YES 00:02:43.690 Compiler for C supports arguments -Wno-format-security: YES 00:02:43.690 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:43.690 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:46.219 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:46.219 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:46.219 Fetching value of define "__AVX2__" : 1 (cached) 00:02:46.219 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.219 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.219 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.219 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:46.219 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:46.219 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:46.219 Program doxygen found: YES (/usr/bin/doxygen) 00:02:46.219 Configuring doxy-api.conf using configuration 00:02:46.219 Program sphinx-build found: NO 00:02:46.219 Configuring rte_build_config.h using configuration 00:02:46.219 Message: 00:02:46.219 ================= 00:02:46.219 Applications Enabled 00:02:46.219 ================= 00:02:46.219 00:02:46.219 apps: 00:02:46.219 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:46.219 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:46.219 test-security-perf, 00:02:46.219 00:02:46.219 Message: 00:02:46.219 ================= 00:02:46.219 Libraries Enabled 00:02:46.219 ================= 00:02:46.219 00:02:46.219 libs: 00:02:46.219 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:46.219 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:46.219 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:46.219 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:46.219 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:46.219 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:46.219 table, pipeline, graph, node, 00:02:46.219 00:02:46.219 Message: 00:02:46.219 =============== 00:02:46.219 Drivers Enabled 00:02:46.219 =============== 00:02:46.219 00:02:46.219 common: 00:02:46.219 00:02:46.219 bus: 00:02:46.219 pci, vdev, 00:02:46.220 mempool: 00:02:46.220 ring, 00:02:46.220 dma: 00:02:46.220 00:02:46.220 net: 00:02:46.220 i40e, 00:02:46.220 raw: 00:02:46.220 00:02:46.220 crypto: 00:02:46.220 00:02:46.220 compress: 00:02:46.220 00:02:46.220 regex: 00:02:46.220 00:02:46.220 vdpa: 00:02:46.220 00:02:46.220 event: 00:02:46.220 00:02:46.220 baseband: 00:02:46.220 00:02:46.220 gpu: 00:02:46.220 00:02:46.220 00:02:46.220 Message: 00:02:46.220 ================= 00:02:46.220 Content Skipped 00:02:46.220 ================= 00:02:46.220 00:02:46.220 apps: 00:02:46.220 00:02:46.220 libs: 00:02:46.220 kni: explicitly disabled via build config (deprecated lib) 00:02:46.220 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:46.220 00:02:46.220 drivers: 00:02:46.220 common/cpt: not in enabled drivers build config 00:02:46.220 common/dpaax: not in enabled drivers build config 00:02:46.220 common/iavf: not in enabled drivers build config 00:02:46.220 common/idpf: not in enabled drivers build config 00:02:46.220 common/mvep: not in enabled drivers build config 00:02:46.220 common/octeontx: not in enabled drivers build config 00:02:46.220 bus/auxiliary: not in enabled drivers build config 00:02:46.220 bus/dpaa: not in enabled drivers build config 00:02:46.220 bus/fslmc: not in enabled drivers build config 00:02:46.220 bus/ifpga: not in enabled drivers build config 00:02:46.220 bus/vmbus: not in enabled drivers build config 00:02:46.220 common/cnxk: not in enabled drivers build config 00:02:46.220 common/mlx5: not in enabled drivers build config 00:02:46.220 common/qat: not in enabled drivers build config 00:02:46.220 common/sfc_efx: not in enabled drivers build config 00:02:46.220 mempool/bucket: not in enabled drivers build config 00:02:46.220 mempool/cnxk: not in enabled drivers build config 00:02:46.220 mempool/dpaa: not in enabled drivers build config 00:02:46.220 mempool/dpaa2: not in enabled drivers build config 00:02:46.220 mempool/octeontx: not in enabled drivers build config 00:02:46.220 mempool/stack: not in enabled drivers build config 00:02:46.220 dma/cnxk: not in enabled drivers build config 00:02:46.220 dma/dpaa: not in enabled drivers build config 00:02:46.220 dma/dpaa2: not in enabled drivers build config 00:02:46.220 dma/hisilicon: not in enabled drivers build config 00:02:46.220 dma/idxd: not in enabled drivers build config 00:02:46.220 dma/ioat: not in enabled drivers build config 00:02:46.220 dma/skeleton: not in enabled drivers build config 00:02:46.220 net/af_packet: not in enabled drivers build config 00:02:46.220 net/af_xdp: not in enabled drivers build config 00:02:46.220 net/ark: not in enabled drivers build config 00:02:46.220 net/atlantic: not in enabled drivers build config 00:02:46.220 net/avp: not in enabled drivers build config 00:02:46.220 net/axgbe: not in enabled drivers build config 00:02:46.220 net/bnx2x: not in enabled drivers build config 00:02:46.220 net/bnxt: not in enabled drivers build config 00:02:46.220 net/bonding: not in enabled drivers build config 00:02:46.220 net/cnxk: not in enabled drivers build config 
00:02:46.220 net/cxgbe: not in enabled drivers build config 00:02:46.220 net/dpaa: not in enabled drivers build config 00:02:46.220 net/dpaa2: not in enabled drivers build config 00:02:46.220 net/e1000: not in enabled drivers build config 00:02:46.220 net/ena: not in enabled drivers build config 00:02:46.220 net/enetc: not in enabled drivers build config 00:02:46.220 net/enetfec: not in enabled drivers build config 00:02:46.220 net/enic: not in enabled drivers build config 00:02:46.220 net/failsafe: not in enabled drivers build config 00:02:46.220 net/fm10k: not in enabled drivers build config 00:02:46.220 net/gve: not in enabled drivers build config 00:02:46.220 net/hinic: not in enabled drivers build config 00:02:46.220 net/hns3: not in enabled drivers build config 00:02:46.220 net/iavf: not in enabled drivers build config 00:02:46.220 net/ice: not in enabled drivers build config 00:02:46.220 net/idpf: not in enabled drivers build config 00:02:46.220 net/igc: not in enabled drivers build config 00:02:46.220 net/ionic: not in enabled drivers build config 00:02:46.220 net/ipn3ke: not in enabled drivers build config 00:02:46.220 net/ixgbe: not in enabled drivers build config 00:02:46.220 net/kni: not in enabled drivers build config 00:02:46.220 net/liquidio: not in enabled drivers build config 00:02:46.220 net/mana: not in enabled drivers build config 00:02:46.220 net/memif: not in enabled drivers build config 00:02:46.220 net/mlx4: not in enabled drivers build config 00:02:46.220 net/mlx5: not in enabled drivers build config 00:02:46.220 net/mvneta: not in enabled drivers build config 00:02:46.220 net/mvpp2: not in enabled drivers build config 00:02:46.220 net/netvsc: not in enabled drivers build config 00:02:46.220 net/nfb: not in enabled drivers build config 00:02:46.220 net/nfp: not in enabled drivers build config 00:02:46.220 net/ngbe: not in enabled drivers build config 00:02:46.220 net/null: not in enabled drivers build config 00:02:46.220 net/octeontx: not in enabled drivers build config 00:02:46.220 net/octeon_ep: not in enabled drivers build config 00:02:46.220 net/pcap: not in enabled drivers build config 00:02:46.220 net/pfe: not in enabled drivers build config 00:02:46.220 net/qede: not in enabled drivers build config 00:02:46.220 net/ring: not in enabled drivers build config 00:02:46.220 net/sfc: not in enabled drivers build config 00:02:46.220 net/softnic: not in enabled drivers build config 00:02:46.220 net/tap: not in enabled drivers build config 00:02:46.220 net/thunderx: not in enabled drivers build config 00:02:46.220 net/txgbe: not in enabled drivers build config 00:02:46.220 net/vdev_netvsc: not in enabled drivers build config 00:02:46.220 net/vhost: not in enabled drivers build config 00:02:46.220 net/virtio: not in enabled drivers build config 00:02:46.220 net/vmxnet3: not in enabled drivers build config 00:02:46.220 raw/cnxk_bphy: not in enabled drivers build config 00:02:46.220 raw/cnxk_gpio: not in enabled drivers build config 00:02:46.220 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:46.220 raw/ifpga: not in enabled drivers build config 00:02:46.220 raw/ntb: not in enabled drivers build config 00:02:46.220 raw/skeleton: not in enabled drivers build config 00:02:46.220 crypto/armv8: not in enabled drivers build config 00:02:46.220 crypto/bcmfs: not in enabled drivers build config 00:02:46.220 crypto/caam_jr: not in enabled drivers build config 00:02:46.220 crypto/ccp: not in enabled drivers build config 00:02:46.220 crypto/cnxk: not in enabled drivers 
build config 00:02:46.220 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.220 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.220 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.220 crypto/mlx5: not in enabled drivers build config 00:02:46.220 crypto/mvsam: not in enabled drivers build config 00:02:46.220 crypto/nitrox: not in enabled drivers build config 00:02:46.220 crypto/null: not in enabled drivers build config 00:02:46.220 crypto/octeontx: not in enabled drivers build config 00:02:46.220 crypto/openssl: not in enabled drivers build config 00:02:46.220 crypto/scheduler: not in enabled drivers build config 00:02:46.220 crypto/uadk: not in enabled drivers build config 00:02:46.220 crypto/virtio: not in enabled drivers build config 00:02:46.220 compress/isal: not in enabled drivers build config 00:02:46.220 compress/mlx5: not in enabled drivers build config 00:02:46.220 compress/octeontx: not in enabled drivers build config 00:02:46.220 compress/zlib: not in enabled drivers build config 00:02:46.220 regex/mlx5: not in enabled drivers build config 00:02:46.220 regex/cn9k: not in enabled drivers build config 00:02:46.220 vdpa/ifc: not in enabled drivers build config 00:02:46.220 vdpa/mlx5: not in enabled drivers build config 00:02:46.220 vdpa/sfc: not in enabled drivers build config 00:02:46.220 event/cnxk: not in enabled drivers build config 00:02:46.220 event/dlb2: not in enabled drivers build config 00:02:46.220 event/dpaa: not in enabled drivers build config 00:02:46.220 event/dpaa2: not in enabled drivers build config 00:02:46.220 event/dsw: not in enabled drivers build config 00:02:46.220 event/opdl: not in enabled drivers build config 00:02:46.220 event/skeleton: not in enabled drivers build config 00:02:46.220 event/sw: not in enabled drivers build config 00:02:46.220 event/octeontx: not in enabled drivers build config 00:02:46.220 baseband/acc: not in enabled drivers build config 00:02:46.220 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:46.220 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:46.220 baseband/la12xx: not in enabled drivers build config 00:02:46.220 baseband/null: not in enabled drivers build config 00:02:46.220 baseband/turbo_sw: not in enabled drivers build config 00:02:46.220 gpu/cuda: not in enabled drivers build config 00:02:46.220 00:02:46.220 00:02:46.220 Build targets in project: 311 00:02:46.220 00:02:46.220 DPDK 22.11.4 00:02:46.220 00:02:46.220 User defined options 00:02:46.220 libdir : lib 00:02:46.220 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:46.220 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:46.220 c_link_args : 00:02:46.220 enable_docs : false 00:02:46.220 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:46.220 enable_kmods : false 00:02:46.220 machine : native 00:02:46.220 tests : false 00:02:46.220 00:02:46.220 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.221 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
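The meson warning above flags that the build was configured as `meson [options]` rather than the explicit `meson setup [options]` form, which is the non-deprecated spelling. As a minimal sketch only, here is how the same configuration could be reproduced with the explicit subcommand, using the build directory, prefix, c_args, and driver list reported in the "User defined options" summary; the actual wrapper logic in autobuild_common.sh is not shown in this log and may pass these values differently.

# Sketch: explicit `meson setup` equivalent of the configuration summarized above
# (paths, flags, and driver list copied from the log; assumed to be passed as -D options)
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
# Then build with the same parallelism the log shows:
ninja -C build-tmp -j10

Using `meson setup` makes the intent unambiguous and silences the deprecation warning; the resulting build (311 targets, docs and kmods disabled, only the i40e net driver plus the pci/vdev buses and ring mempool enabled) should match what the log records below.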
00:02:46.221 18:20:46 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:46.221 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:46.221 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:46.221 [2/740] Generating lib/rte_telemetry_def with a custom command 00:02:46.221 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:46.221 [4/740] Generating lib/rte_kvargs_def with a custom command 00:02:46.478 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.478 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.478 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.478 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.479 [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.479 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.479 [11/740] Linking static target lib/librte_kvargs.a 00:02:46.479 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.479 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.479 [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.737 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.737 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.737 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.737 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.737 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:46.737 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.737 [21/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.737 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.737 [23/740] Linking target lib/librte_kvargs.so.23.0 00:02:46.737 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.737 [25/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.737 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.737 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.737 [28/740] Linking static target lib/librte_telemetry.a 00:02:46.995 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.995 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.995 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.995 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.995 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.995 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.995 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.253 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.253 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.253 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.253 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:47.253 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.253 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.253 [42/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.253 [43/740] Linking target lib/librte_telemetry.so.23.0 00:02:47.253 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.511 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.511 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.511 [47/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:47.511 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.511 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.511 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.511 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.511 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.511 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.511 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.511 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.769 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.769 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.769 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.769 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:47.769 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:47.769 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.769 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:47.769 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:47.769 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.769 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.769 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:47.769 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.769 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.769 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:47.769 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.769 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.027 [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.027 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.027 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.027 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.027 [76/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.027 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.027 [78/740] Generating 
lib/rte_eal_mingw with a custom command 00:02:48.027 [79/740] Generating lib/rte_eal_def with a custom command 00:02:48.027 [80/740] Generating lib/rte_ring_def with a custom command 00:02:48.027 [81/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.027 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:48.027 [83/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.027 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:48.027 [85/740] Generating lib/rte_rcu_def with a custom command 00:02:48.027 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.286 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.286 [88/740] Linking static target lib/librte_ring.a 00:02:48.286 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.286 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:48.286 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.286 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:02:48.286 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.544 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.544 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.544 [96/740] Linking static target lib/librte_eal.a 00:02:48.802 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.802 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.802 [99/740] Generating lib/rte_mbuf_def with a custom command 00:02:48.802 [100/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.802 [101/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:48.802 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.802 [103/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.802 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:48.802 [105/740] Linking static target lib/librte_rcu.a 00:02:49.060 [106/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.060 [107/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.060 [108/740] Linking static target lib/librte_mempool.a 00:02:49.318 [109/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.318 [110/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.318 [111/740] Generating lib/rte_net_mingw with a custom command 00:02:49.318 [112/740] Generating lib/rte_net_def with a custom command 00:02:49.318 [113/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.318 [114/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.318 [115/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.318 [116/740] Generating lib/rte_meter_def with a custom command 00:02:49.318 [117/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.318 [118/740] Generating lib/rte_meter_mingw with a custom command 00:02:49.318 [119/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.318 [120/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.318 [121/740] Linking static target lib/librte_net.a 00:02:49.318 [122/740] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.318 [123/740] Linking static target lib/librte_meter.a 00:02:49.576 [124/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.576 [125/740] Linking static target lib/librte_mbuf.a 00:02:49.857 [126/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.857 [127/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.857 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.857 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.118 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.118 [131/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.118 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.377 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.377 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.377 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.635 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.635 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:50.635 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.635 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:50.635 [140/740] Generating lib/rte_pci_def with a custom command 00:02:50.635 [141/740] Generating lib/rte_pci_mingw with a custom command 00:02:50.635 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.635 [143/740] Linking static target lib/librte_pci.a 00:02:50.635 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.635 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.892 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.892 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.892 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.892 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.892 [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.892 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.892 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.151 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.151 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.151 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.151 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.151 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:51.151 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:51.151 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.151 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:51.151 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:02:51.409 [162/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.409 
[163/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:51.409 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.409 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.409 [166/740] Linking static target lib/librte_cmdline.a 00:02:51.409 [167/740] Generating lib/rte_hash_def with a custom command 00:02:51.409 [168/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.409 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:51.409 [170/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.409 [171/740] Generating lib/rte_timer_def with a custom command 00:02:51.409 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:51.409 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.668 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:51.668 [175/740] Linking static target lib/librte_metrics.a 00:02:51.926 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.926 [177/740] Linking static target lib/librte_timer.a 00:02:52.185 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.185 [179/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.185 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:52.443 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.443 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.701 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.701 [184/740] Linking static target lib/librte_ethdev.a 00:02:52.701 [185/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:52.701 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:52.701 [187/740] Generating lib/rte_acl_def with a custom command 00:02:52.701 [188/740] Generating lib/rte_acl_mingw with a custom command 00:02:52.701 [189/740] Generating lib/rte_bbdev_def with a custom command 00:02:52.959 [190/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:52.959 [191/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:52.959 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:52.959 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:53.524 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:53.524 [195/740] Linking static target lib/librte_bitratestats.a 00:02:53.524 [196/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:53.524 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:53.524 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:53.524 [199/740] Linking static target lib/librte_bbdev.a 00:02:53.524 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.088 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:54.088 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:54.088 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:54.346 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.346 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:02:54.346 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:54.912 [207/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:54.912 [208/740] Generating lib/rte_bpf_def with a custom command 00:02:54.912 [209/740] Generating lib/rte_bpf_mingw with a custom command 00:02:54.912 [210/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.912 [211/740] Linking static target lib/librte_hash.a 00:02:54.912 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:55.171 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:55.171 [214/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:55.171 [215/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:55.171 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:55.171 [217/740] Linking static target lib/librte_cfgfile.a 00:02:55.428 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:55.428 [219/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:55.428 [220/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.428 [221/740] Generating lib/rte_compressdev_def with a custom command 00:02:55.686 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:55.686 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:55.686 [224/740] Linking static target lib/librte_bpf.a 00:02:55.686 [225/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.686 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:55.686 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:55.686 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:55.944 [229/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:55.944 [230/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:55.944 [231/740] Linking static target lib/librte_acl.a 00:02:55.944 [232/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.944 [233/740] Linking static target lib/librte_compressdev.a 00:02:55.944 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:55.944 [235/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.202 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:56.202 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:56.202 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.202 [239/740] Generating lib/rte_efd_def with a custom command 00:02:56.202 [240/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:56.202 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:56.460 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:56.460 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:56.720 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:56.720 [245/740] Linking static target lib/librte_distributor.a 00:02:56.720 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:56.978 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:56.978 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.978 [249/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.236 [250/740] Linking target lib/librte_eal.so.23.0 00:02:57.236 [251/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.236 [252/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:57.495 [253/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:57.495 [254/740] Generating lib/rte_eventdev_def with a custom command 00:02:57.495 [255/740] Linking target lib/librte_meter.so.23.0 00:02:57.495 [256/740] Linking target lib/librte_ring.so.23.0 00:02:57.753 [257/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:57.753 [258/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:57.753 [259/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:57.753 [260/740] Linking target lib/librte_rcu.so.23.0 00:02:57.753 [261/740] Linking target lib/librte_mempool.so.23.0 00:02:57.753 [262/740] Linking target lib/librte_pci.so.23.0 00:02:58.012 [263/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:58.012 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:58.012 [265/740] Linking target lib/librte_timer.so.23.0 00:02:58.012 [266/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:58.012 [267/740] Linking target lib/librte_mbuf.so.23.0 00:02:58.012 [268/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:58.012 [269/740] Linking target lib/librte_acl.so.23.0 00:02:58.012 [270/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:58.012 [271/740] Linking target lib/librte_cfgfile.so.23.0 00:02:58.012 [272/740] Linking static target lib/librte_efd.a 00:02:58.269 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.269 [274/740] Linking static target lib/librte_cryptodev.a 00:02:58.269 [275/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:58.269 [276/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:58.269 [277/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:58.269 [278/740] Linking target lib/librte_net.so.23.0 00:02:58.269 [279/740] Linking target lib/librte_bbdev.so.23.0 00:02:58.269 [280/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.269 [281/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:58.269 [282/740] Linking target lib/librte_compressdev.so.23.0 00:02:58.269 [283/740] Generating lib/rte_gpudev_def with a custom command 00:02:58.269 [284/740] Linking target lib/librte_distributor.so.23.0 00:02:58.269 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:58.528 [286/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.528 [287/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:58.528 [288/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:58.528 [289/740] Linking 
target lib/librte_ethdev.so.23.0 00:02:58.785 [290/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:58.785 [291/740] Linking target lib/librte_cmdline.so.23.0 00:02:58.785 [292/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:58.785 [293/740] Linking target lib/librte_hash.so.23.0 00:02:58.785 [294/740] Linking target lib/librte_metrics.so.23.0 00:02:59.043 [295/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:59.043 [296/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:59.043 [297/740] Linking target lib/librte_bpf.so.23.0 00:02:59.043 [298/740] Linking target lib/librte_efd.so.23.0 00:02:59.043 [299/740] Linking static target lib/librte_gpudev.a 00:02:59.043 [300/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:59.043 [301/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:59.043 [302/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:59.043 [303/740] Generating lib/rte_gro_def with a custom command 00:02:59.043 [304/740] Generating lib/rte_gro_mingw with a custom command 00:02:59.043 [305/740] Linking target lib/librte_bitratestats.so.23.0 00:02:59.043 [306/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:59.301 [307/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:59.301 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:59.559 [309/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:59.559 [310/740] Linking static target lib/librte_gro.a 00:02:59.559 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:59.559 [312/740] Generating lib/rte_gso_def with a custom command 00:02:59.559 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:59.559 [314/740] Generating lib/rte_gso_mingw with a custom command 00:02:59.817 [315/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:59.817 [316/740] Linking static target lib/librte_eventdev.a 00:02:59.817 [317/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.817 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:59.817 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:59.817 [320/740] Linking target lib/librte_gro.so.23.0 00:03:00.075 [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:00.075 [322/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:00.075 [323/740] Linking static target lib/librte_gso.a 00:03:00.075 [324/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.075 [325/740] Linking target lib/librte_gpudev.so.23.0 00:03:00.075 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:03:00.075 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:03:00.333 [328/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.333 [329/740] Linking target lib/librte_gso.so.23.0 00:03:00.333 [330/740] Generating lib/rte_jobstats_def with a custom command 00:03:00.333 [331/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:00.333 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:03:00.333 [333/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 
00:03:00.333 [334/740] Linking static target lib/librte_jobstats.a 00:03:00.333 [335/740] Generating lib/rte_latencystats_def with a custom command 00:03:00.333 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:00.333 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:00.333 [338/740] Generating lib/rte_latencystats_mingw with a custom command 00:03:00.333 [339/740] Generating lib/rte_lpm_def with a custom command 00:03:00.333 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:03:00.591 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:00.591 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:00.591 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.849 [344/740] Linking target lib/librte_jobstats.so.23.0 00:03:00.849 [345/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.849 [346/740] Linking target lib/librte_cryptodev.so.23.0 00:03:00.849 [347/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:00.849 [348/740] Linking static target lib/librte_ip_frag.a 00:03:00.849 [349/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:00.849 [350/740] Linking static target lib/librte_latencystats.a 00:03:00.849 [351/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:01.107 [352/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:01.107 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:01.107 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:01.107 [355/740] Generating lib/rte_member_def with a custom command 00:03:01.107 [356/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:01.107 [357/740] Generating lib/rte_member_mingw with a custom command 00:03:01.107 [358/740] Generating lib/rte_pcapng_def with a custom command 00:03:01.107 [359/740] Generating lib/rte_pcapng_mingw with a custom command 00:03:01.364 [360/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.364 [361/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.364 [362/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.364 [363/740] Linking target lib/librte_ip_frag.so.23.0 00:03:01.364 [364/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.364 [365/740] Linking target lib/librte_latencystats.so.23.0 00:03:01.364 [366/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:01.622 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.622 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:01.622 [369/740] Linking static target lib/librte_lpm.a 00:03:01.622 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.880 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:01.880 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:01.880 [373/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:01.880 [374/740] Linking static target lib/librte_pcapng.a 00:03:01.880 [375/740] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.138 [376/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:02.138 [377/740] Generating lib/rte_power_def with a custom command 00:03:02.138 [378/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.138 [379/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.138 [380/740] Generating lib/rte_power_mingw with a custom command 00:03:02.138 [381/740] Linking target lib/librte_lpm.so.23.0 00:03:02.138 [382/740] Generating lib/rte_rawdev_mingw with a custom command 00:03:02.138 [383/740] Generating lib/rte_rawdev_def with a custom command 00:03:02.138 [384/740] Generating lib/rte_regexdev_def with a custom command 00:03:02.138 [385/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.138 [386/740] Generating lib/rte_regexdev_mingw with a custom command 00:03:02.396 [387/740] Linking target lib/librte_eventdev.so.23.0 00:03:02.396 [388/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:02.396 [389/740] Generating lib/rte_dmadev_def with a custom command 00:03:02.396 [390/740] Generating lib/rte_dmadev_mingw with a custom command 00:03:02.396 [391/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.396 [392/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:02.396 [393/740] Linking target lib/librte_pcapng.so.23.0 00:03:02.653 [394/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.653 [395/740] Generating lib/rte_rib_def with a custom command 00:03:02.653 [396/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:02.653 [397/740] Generating lib/rte_rib_mingw with a custom command 00:03:02.653 [398/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:02.653 [399/740] Linking static target lib/librte_rawdev.a 00:03:02.653 [400/740] Generating lib/rte_reorder_def with a custom command 00:03:02.653 [401/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:02.653 [402/740] Generating lib/rte_reorder_mingw with a custom command 00:03:02.653 [403/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:02.653 [404/740] Linking static target lib/librte_regexdev.a 00:03:02.912 [405/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:02.912 [406/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.912 [407/740] Linking static target lib/librte_dmadev.a 00:03:02.912 [408/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.912 [409/740] Linking static target lib/librte_power.a 00:03:03.171 [410/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:03.171 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:03.171 [412/740] Generating lib/rte_sched_def with a custom command 00:03:03.171 [413/740] Generating lib/rte_sched_mingw with a custom command 00:03:03.171 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:03.171 [415/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.171 [416/740] Generating lib/rte_security_def with a custom command 00:03:03.171 [417/740] Generating lib/rte_security_mingw with a custom command 00:03:03.171 
[418/740] Linking target lib/librte_rawdev.so.23.0 00:03:03.495 [419/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:03.495 [420/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:03.495 [421/740] Linking static target lib/librte_rib.a 00:03:03.495 [422/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:03.495 [423/740] Generating lib/rte_stack_def with a custom command 00:03:03.495 [424/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.495 [425/740] Generating lib/rte_stack_mingw with a custom command 00:03:03.495 [426/740] Linking static target lib/librte_reorder.a 00:03:03.495 [427/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:03.495 [428/740] Linking static target lib/librte_stack.a 00:03:03.776 [429/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.776 [430/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:03.776 [431/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.776 [432/740] Linking static target lib/librte_member.a 00:03:03.776 [433/740] Linking target lib/librte_regexdev.so.23.0 00:03:03.776 [434/740] Linking target lib/librte_dmadev.so.23.0 00:03:03.776 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.776 [436/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.776 [437/740] Linking target lib/librte_stack.so.23.0 00:03:03.776 [438/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.776 [439/740] Linking target lib/librte_reorder.so.23.0 00:03:03.776 [440/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:03.776 [441/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.035 [442/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.035 [443/740] Linking static target lib/librte_security.a 00:03:04.035 [444/740] Linking target lib/librte_rib.so.23.0 00:03:04.035 [445/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.035 [446/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.035 [447/740] Linking target lib/librte_member.so.23.0 00:03:04.035 [448/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:04.035 [449/740] Linking target lib/librte_power.so.23.0 00:03:04.295 [450/740] Generating lib/rte_vhost_def with a custom command 00:03:04.295 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:03:04.295 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.295 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.295 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.554 [455/740] Linking target lib/librte_security.so.23.0 00:03:04.554 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.554 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:04.812 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:04.812 [459/740] Linking static target lib/librte_sched.a 00:03:05.070 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:05.070 
[461/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.328 [462/740] Generating lib/rte_ipsec_def with a custom command 00:03:05.328 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:03:05.328 [464/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:05.328 [465/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.328 [466/740] Linking target lib/librte_sched.so.23.0 00:03:05.328 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:05.586 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:05.586 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.586 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:05.844 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:05.844 [472/740] Generating lib/rte_fib_def with a custom command 00:03:05.844 [473/740] Generating lib/rte_fib_mingw with a custom command 00:03:05.844 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:06.408 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:06.408 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:06.409 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:06.667 [478/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:06.667 [479/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:06.667 [480/740] Linking static target lib/librte_fib.a 00:03:06.945 [481/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:06.945 [482/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:07.238 [483/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:07.238 [484/740] Linking static target lib/librte_ipsec.a 00:03:07.238 [485/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:07.238 [486/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.238 [487/740] Linking target lib/librte_fib.so.23.0 00:03:07.238 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:07.496 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:07.496 [490/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.496 [491/740] Linking target lib/librte_ipsec.so.23.0 00:03:08.060 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:08.060 [493/740] Generating lib/rte_port_def with a custom command 00:03:08.060 [494/740] Generating lib/rte_port_mingw with a custom command 00:03:08.322 [495/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:08.322 [496/740] Generating lib/rte_pdump_def with a custom command 00:03:08.322 [497/740] Generating lib/rte_pdump_mingw with a custom command 00:03:08.322 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:08.322 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:08.322 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:08.581 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:08.581 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:08.838 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:08.838 [504/740] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:08.838 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:08.838 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:09.097 [507/740] Linking static target lib/librte_port.a 00:03:09.097 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:09.097 [509/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:09.353 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:09.353 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:09.353 [512/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:09.353 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:09.353 [514/740] Linking static target lib/librte_pdump.a 00:03:09.610 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.610 [516/740] Linking target lib/librte_port.so.23.0 00:03:09.869 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.869 [518/740] Linking target lib/librte_pdump.so.23.0 00:03:09.869 [519/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:09.869 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:10.127 [521/740] Generating lib/rte_table_def with a custom command 00:03:10.127 [522/740] Generating lib/rte_table_mingw with a custom command 00:03:10.127 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:10.386 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:10.386 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:10.386 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:10.386 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:10.386 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:10.386 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:10.386 [530/740] Generating lib/rte_pipeline_def with a custom command 00:03:10.645 [531/740] Linking static target lib/librte_table.a 00:03:10.645 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:03:10.645 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:10.904 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:11.163 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:11.163 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:11.163 [537/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.163 [538/740] Linking target lib/librte_table.so.23.0 00:03:11.437 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:11.437 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:11.437 [541/740] Generating lib/rte_graph_def with a custom command 00:03:11.437 [542/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:11.437 [543/740] Generating lib/rte_graph_mingw with a custom command 00:03:11.695 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:11.954 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:11.954 [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:11.954 [547/740] Linking static target lib/librte_graph.a 00:03:11.954 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:12.213 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:12.213 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:12.471 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:12.471 [552/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:12.729 [553/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.729 [554/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:12.729 [555/740] Linking target lib/librte_graph.so.23.0 00:03:12.729 [556/740] Generating lib/rte_node_def with a custom command 00:03:12.729 [557/740] Generating lib/rte_node_mingw with a custom command 00:03:12.729 [558/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.729 [559/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:12.988 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:12.988 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:12.988 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:12.988 [563/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:12.988 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:03:12.988 [565/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:12.988 [566/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:13.247 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:13.247 [568/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:13.247 [569/740] Generating drivers/rte_bus_vdev_def with a custom command 00:03:13.247 [570/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:13.247 [571/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:13.247 [572/740] Linking static target lib/librte_node.a 00:03:13.247 [573/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:13.247 [574/740] Generating drivers/rte_mempool_ring_def with a custom command 00:03:13.247 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:13.522 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:13.522 [577/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:13.522 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:13.522 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:13.522 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:13.522 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.522 [582/740] Linking target lib/librte_node.so.23.0 00:03:13.786 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:13.786 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:13.786 [585/740] Linking static target drivers/librte_bus_vdev.a 00:03:13.786 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:13.786 
[587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:13.786 [588/740] Linking static target drivers/librte_bus_pci.a 00:03:14.045 [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.045 [590/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:14.045 [591/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:14.045 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:14.045 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:14.045 [594/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:14.046 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.046 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:14.304 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:14.304 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:14.304 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:14.563 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:14.563 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:14.563 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:14.821 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:14.821 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.821 [605/740] Linking static target drivers/librte_mempool_ring.a 00:03:14.821 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:14.821 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:15.079 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:15.337 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:15.337 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:15.337 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:16.282 [612/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:16.282 [613/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:16.282 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:16.282 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:16.540 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:16.799 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:16.799 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:17.059 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:03:17.059 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:17.059 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:17.317 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:17.885 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:17.885 
[624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:18.144 [625/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:18.144 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:18.144 [627/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:18.402 [628/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:18.402 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:18.402 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:18.402 [631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:18.971 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:18.971 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:18.971 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:19.231 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:19.232 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:19.232 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:19.232 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:19.490 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:19.749 [640/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.749 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:19.749 [642/740] Linking static target drivers/librte_net_i40e.a 00:03:19.749 [643/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.749 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:19.749 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:20.008 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:20.008 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:20.267 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:20.267 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.267 [650/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.267 [651/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:20.525 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:20.783 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:20.783 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:20.783 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:20.783 [656/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:20.783 [657/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:20.783 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:20.783 [659/740] Linking static target lib/librte_vhost.a 
00:03:21.041 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:21.041 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:21.299 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:21.299 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:21.299 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:21.299 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:21.559 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:21.559 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:21.817 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:22.076 [669/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.076 [670/740] Linking target lib/librte_vhost.so.23.0 00:03:22.334 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:22.334 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:22.334 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:22.593 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:22.853 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:22.853 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:22.853 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:22.853 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:22.853 [679/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:23.112 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:23.112 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:23.112 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:23.372 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:23.372 [684/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:23.372 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:23.630 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:23.630 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:23.630 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:23.630 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:23.630 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:24.198 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:24.198 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:24.198 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:24.198 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:24.456 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:24.456 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 
00:03:24.715 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:24.973 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:24.973 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:24.973 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:25.233 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:25.492 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:25.492 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:25.492 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.751 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:25.751 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:26.009 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:26.267 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:26.267 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:26.267 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:26.834 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:26.834 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:26.834 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:26.834 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:26.834 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:27.093 [716/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:27.352 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:27.352 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:27.611 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:28.550 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:28.809 [721/740] Linking static target lib/librte_pipeline.a 00:03:29.068 [722/740] Linking target app/dpdk-pdump 00:03:29.068 [723/740] Linking target app/dpdk-test-compress-perf 00:03:29.068 [724/740] Linking target app/dpdk-test-crypto-perf 00:03:29.068 [725/740] Linking target app/dpdk-test-cmdline 00:03:29.327 [726/740] Linking target app/dpdk-test-bbdev 00:03:29.327 [727/740] Linking target app/dpdk-dumpcap 00:03:29.327 [728/740] Linking target app/dpdk-test-acl 00:03:29.327 [729/740] Linking target app/dpdk-proc-info 00:03:29.327 [730/740] Linking target app/dpdk-test-eventdev 00:03:29.586 [731/740] Linking target app/dpdk-test-fib 00:03:29.586 [732/740] Linking target app/dpdk-test-pipeline 00:03:29.586 [733/740] Linking target app/dpdk-test-flow-perf 00:03:29.586 [734/740] Linking target app/dpdk-test-gpudev 00:03:29.586 [735/740] Linking target app/dpdk-test-regex 00:03:29.586 [736/740] Linking target app/dpdk-test-security-perf 00:03:29.586 [737/740] Linking target app/dpdk-test-sad 00:03:29.844 [738/740] Linking target app/dpdk-testpmd 00:03:33.157 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.416 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:33.416 18:21:33 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:33.416 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 
00:03:33.416 [0/1] Installing files. 00:03:33.679 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.679 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.680 Installing 
/home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.681 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.682 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.682 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.683 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.684 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.684 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:33.942 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.942 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:33.943 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:33.943 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:33.943 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.943 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:33.943 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.943 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.203 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.204 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.205 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.206 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.206 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:34.206 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:34.206 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:34.206 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:34.206 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:34.206 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:34.206 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:34.206 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:34.206 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:34.206 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:34.206 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:34.206 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:34.206 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:34.206 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:34.206 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:34.206 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:34.206 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:34.206 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:34.206 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:34.206 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:34.206 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:34.206 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:34.206 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:34.206 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:34.206 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:34.206 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:34.206 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:34.206 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:34.206 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:34.206 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:34.206 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:34.206 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:34.206 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:34.206 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:34.206 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:34.206 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:34.206 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:34.206 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:34.206 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:34.206 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:34.206 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:34.206 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:34.206 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:34.206 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:34.207 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:34.207 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:34.207 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:34.207 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:34.207 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:34.207 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:34.207 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:34.207 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:34.207 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:34.207 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:34.207 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:34.207 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:34.207 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:34.207 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:34.207 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:34.207 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:34.207 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:34.207 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:34.207 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:34.207 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:34.207 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:34.207 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:34.207 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:34.207 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:34.207 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:34.207 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:34.207 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:34.207 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:34.207 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:34.207 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:34.207 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:34.207 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:34.207 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:34.207 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:34.207 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:34.207 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:34.207 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:34.207 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:34.207 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:34.207 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:34.207 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:34.207 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:34.207 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:34.207 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:34.207 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:34.207 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:34.207 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:34.207 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:34.207 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:34.207 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:34.207 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:34.207 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:34.207 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:34.207 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:34.207 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:34.207 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:34.207 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:34.207 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:34.207 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:34.207 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:34.207 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:34.207 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:34.207 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:34.207 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:34.207 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:34.207 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:34.207 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:34.207 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:34.207 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:34.207 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:34.207 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:34.207 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:34.207 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:34.207 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:34.207 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:34.207 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:34.207 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:34.207 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:34.207 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:34.207 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:34.207 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:34.466 18:21:34 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:03:34.466 18:21:34 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.466 18:21:34 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:03:34.466 18:21:34 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.466 00:03:34.466 real 0m55.975s 00:03:34.466 user 5m50.092s 00:03:34.466 sys 1m6.466s 00:03:34.466 ************************************ 00:03:34.466 END TEST build_native_dpdk 00:03:34.466 ************************************ 00:03:34.466 18:21:34 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:34.466 18:21:34 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.466 18:21:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.466 18:21:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.466 18:21:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:34.466 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:34.723 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.723 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:34.723 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:34.982 Using 'verbs' RDMA provider 00:03:51.835 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:06.731 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:07.299 Creating mk/config.mk...done. 00:04:07.299 Creating mk/cc.flags.mk...done. 00:04:07.299 Type 'make' to build. 00:04:07.299 18:22:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:07.299 18:22:07 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:04:07.299 18:22:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:04:07.299 18:22:07 -- common/autotest_common.sh@10 -- $ set +x 00:04:07.299 ************************************ 00:04:07.299 START TEST make 00:04:07.299 ************************************ 00:04:07.299 18:22:07 make -- common/autotest_common.sh@1121 -- $ make -j10 00:04:07.558 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:07.558 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:07.558 meson setup builddir \ 00:04:07.558 -Dwith-libaio=enabled \ 00:04:07.558 -Dwith-liburing=enabled \ 00:04:07.558 -Dwith-libvfn=disabled \ 00:04:07.558 -Dwith-spdk=false && \ 00:04:07.558 meson compile -C builddir && \ 00:04:07.558 cd -) 00:04:07.817 make[1]: Nothing to be done for 'all'. 
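For reference, a minimal sketch of how the DPDK installed above under /home/vagrant/spdk_repo/dpdk/build can be consumed through the libdpdk.pc file that the install step placed in build/lib/pkgconfig (the same path the configure step reports as "additional libs"). The paths come from the log; the pkg-config invocations themselves are illustrative and are not part of the logged run:

    # illustrative only -- not commands executed by the autobuild scripts
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # version string recorded in libdpdk.pc
    pkg-config --cflags libdpdk       # header search path under build/include
    pkg-config --libs libdpdk         # linker flags pointing at build/lib

SPDK's configure consumes the same .pc files when --with-dpdk points at that build directory, as shown in the output above.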
00:04:10.352 The Meson build system 00:04:10.352 Version: 1.3.1 00:04:10.352 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:10.352 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:10.352 Build type: native build 00:04:10.352 Project name: xnvme 00:04:10.352 Project version: 0.7.3 00:04:10.352 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:10.352 C linker for the host machine: gcc ld.bfd 2.39-16 00:04:10.352 Host machine cpu family: x86_64 00:04:10.352 Host machine cpu: x86_64 00:04:10.352 Message: host_machine.system: linux 00:04:10.352 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:10.352 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:10.352 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:10.352 Run-time dependency threads found: YES 00:04:10.352 Has header "setupapi.h" : NO 00:04:10.352 Has header "linux/blkzoned.h" : YES 00:04:10.352 Has header "linux/blkzoned.h" : YES (cached) 00:04:10.352 Has header "libaio.h" : YES 00:04:10.352 Library aio found: YES 00:04:10.352 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:10.352 Run-time dependency liburing found: YES 2.2 00:04:10.352 Dependency libvfn skipped: feature with-libvfn disabled 00:04:10.352 Run-time dependency appleframeworks found: NO (tried framework) 00:04:10.352 Run-time dependency appleframeworks found: NO (tried framework) 00:04:10.352 Configuring xnvme_config.h using configuration 00:04:10.352 Configuring xnvme.spec using configuration 00:04:10.352 Run-time dependency bash-completion found: YES 2.11 00:04:10.352 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:10.352 Program cp found: YES (/usr/bin/cp) 00:04:10.352 Has header "winsock2.h" : NO 00:04:10.352 Has header "dbghelp.h" : NO 00:04:10.352 Library rpcrt4 found: NO 00:04:10.352 Library rt found: YES 00:04:10.352 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:10.352 Found CMake: /usr/bin/cmake (3.27.7) 00:04:10.352 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:10.352 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:10.353 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:10.353 Build targets in project: 32 00:04:10.353 00:04:10.353 xnvme 0.7.3 00:04:10.353 00:04:10.353 User defined options 00:04:10.353 with-libaio : enabled 00:04:10.353 with-liburing: enabled 00:04:10.353 with-libvfn : disabled 00:04:10.353 with-spdk : false 00:04:10.353 00:04:10.353 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:10.611 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:10.611 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:10.611 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:10.611 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:10.611 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:10.611 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:10.611 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:10.611 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:10.869 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:10.869 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:10.869 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:10.869 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:10.869 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:10.869 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:10.869 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:10.869 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:10.869 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:10.869 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:10.869 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:10.869 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:10.869 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:10.869 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:10.869 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:10.869 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:10.869 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:10.869 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:11.127 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:11.127 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:11.127 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:11.127 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:11.127 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:11.127 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:11.127 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:11.127 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:11.127 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:11.127 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:11.127 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:11.127 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:11.127 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:11.127 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:11.127 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:11.127 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:11.127 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:11.127 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:11.127 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:11.127 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:11.127 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:11.127 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:11.127 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:11.127 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:11.127 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:11.127 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:11.127 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:11.127 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:11.127 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:11.127 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:11.127 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:11.386 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:11.386 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:11.386 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:11.386 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:11.386 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:11.386 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:11.386 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:11.386 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:11.386 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:11.386 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:11.386 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:11.386 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:11.386 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:11.386 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:11.386 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:11.645 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:11.645 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:11.645 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:11.645 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:11.645 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:11.645 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:11.645 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:11.645 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:11.645 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:11.645 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:11.645 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:11.645 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:11.645 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:11.904 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:11.904 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:11.904 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:11.904 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:11.904 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:11.904 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:11.904 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:11.905 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:11.905 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:11.905 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:11.905 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:11.905 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:11.905 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:11.905 [98/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:11.905 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:11.905 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:11.905 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:11.905 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:11.905 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:11.905 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:11.905 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:11.905 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:11.905 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:11.905 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:11.905 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:11.905 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:11.905 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:11.905 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:11.905 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:12.163 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:12.163 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:12.163 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:12.163 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:12.163 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:12.163 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:12.163 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:12.163 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:12.163 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:12.163 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:12.163 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:12.163 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:12.163 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:12.163 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:12.163 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:12.163 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:12.164 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:12.164 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:12.164 [132/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:12.423 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:12.423 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:12.423 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:12.423 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:12.423 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:12.423 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:12.423 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:12.423 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:12.423 [141/203] Linking target lib/libxnvme.so 00:04:12.423 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:12.423 
[143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:12.423 [144/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:12.423 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:12.423 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:12.423 [147/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:12.423 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:12.682 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:12.682 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:12.682 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:12.682 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:12.682 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:12.682 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:12.682 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:12.682 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:12.682 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:12.682 [158/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:12.682 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:12.682 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:12.682 [161/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:12.942 [162/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:12.942 [163/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:12.942 [164/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:12.942 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:12.942 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:12.942 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:12.942 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:12.942 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:12.942 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:12.942 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:12.942 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:13.201 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:13.201 [174/203] Linking static target lib/libxnvme.a 00:04:13.201 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:13.201 [176/203] Linking target tests/xnvme_tests_buf 00:04:13.201 [177/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:13.201 [178/203] Linking target tests/xnvme_tests_lblk 00:04:13.201 [179/203] Linking target tests/xnvme_tests_cli 00:04:13.201 [180/203] Linking target tests/xnvme_tests_enum 00:04:13.201 [181/203] Linking target tests/xnvme_tests_znd_append 00:04:13.201 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:13.201 [183/203] Linking target tests/xnvme_tests_scc 00:04:13.201 [184/203] Linking target tests/xnvme_tests_ioworker 00:04:13.201 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:04:13.201 [186/203] Linking target tests/xnvme_tests_znd_state 00:04:13.201 [187/203] Linking target tools/lblk 00:04:13.201 [188/203] Linking target tests/xnvme_tests_kvs 00:04:13.201 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:13.201 [190/203] Linking target tools/kvs 00:04:13.201 [191/203] Linking target tools/xdd 
00:04:13.201 [192/203] Linking target tests/xnvme_tests_map 00:04:13.201 [193/203] Linking target tools/zoned 00:04:13.201 [194/203] Linking target tools/xnvme_file 00:04:13.201 [195/203] Linking target examples/xnvme_dev 00:04:13.201 [196/203] Linking target tools/xnvme 00:04:13.460 [197/203] Linking target examples/xnvme_hello 00:04:13.460 [198/203] Linking target examples/xnvme_enum 00:04:13.460 [199/203] Linking target examples/xnvme_io_async 00:04:13.460 [200/203] Linking target examples/xnvme_single_async 00:04:13.460 [201/203] Linking target examples/xnvme_single_sync 00:04:13.460 [202/203] Linking target examples/zoned_io_async 00:04:13.460 [203/203] Linking target examples/zoned_io_sync 00:04:13.460 INFO: autodetecting backend as ninja 00:04:13.460 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:13.460 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:35.416 CC lib/ut_mock/mock.o 00:04:35.416 CC lib/log/log.o 00:04:35.416 CC lib/log/log_flags.o 00:04:35.416 CC lib/log/log_deprecated.o 00:04:35.416 CC lib/ut/ut.o 00:04:35.416 LIB libspdk_ut_mock.a 00:04:35.416 LIB libspdk_log.a 00:04:35.416 SO libspdk_ut_mock.so.6.0 00:04:35.416 LIB libspdk_ut.a 00:04:35.416 SO libspdk_log.so.7.0 00:04:35.416 SO libspdk_ut.so.2.0 00:04:35.416 SYMLINK libspdk_ut_mock.so 00:04:35.416 SYMLINK libspdk_log.so 00:04:35.416 SYMLINK libspdk_ut.so 00:04:35.416 CC lib/util/cpuset.o 00:04:35.416 CC lib/util/base64.o 00:04:35.416 CC lib/util/bit_array.o 00:04:35.416 CC lib/util/crc16.o 00:04:35.416 CC lib/util/crc32.o 00:04:35.416 CC lib/util/crc32c.o 00:04:35.416 CXX lib/trace_parser/trace.o 00:04:35.416 CC lib/ioat/ioat.o 00:04:35.416 CC lib/dma/dma.o 00:04:35.416 CC lib/util/crc32_ieee.o 00:04:35.416 CC lib/vfio_user/host/vfio_user_pci.o 00:04:35.416 CC lib/util/crc64.o 00:04:35.416 CC lib/util/dif.o 00:04:35.416 CC lib/vfio_user/host/vfio_user.o 00:04:35.416 CC lib/util/fd.o 00:04:35.416 LIB libspdk_dma.a 00:04:35.416 CC lib/util/file.o 00:04:35.416 CC lib/util/hexlify.o 00:04:35.416 SO libspdk_dma.so.4.0 00:04:35.416 CC lib/util/iov.o 00:04:35.416 LIB libspdk_ioat.a 00:04:35.416 CC lib/util/math.o 00:04:35.416 SYMLINK libspdk_dma.so 00:04:35.416 SO libspdk_ioat.so.7.0 00:04:35.416 CC lib/util/pipe.o 00:04:35.416 CC lib/util/strerror_tls.o 00:04:35.416 CC lib/util/string.o 00:04:35.416 CC lib/util/uuid.o 00:04:35.416 SYMLINK libspdk_ioat.so 00:04:35.416 CC lib/util/fd_group.o 00:04:35.416 LIB libspdk_vfio_user.a 00:04:35.416 SO libspdk_vfio_user.so.5.0 00:04:35.416 CC lib/util/xor.o 00:04:35.416 CC lib/util/zipf.o 00:04:35.416 SYMLINK libspdk_vfio_user.so 00:04:35.416 LIB libspdk_util.a 00:04:35.416 SO libspdk_util.so.9.0 00:04:35.416 SYMLINK libspdk_util.so 00:04:35.416 LIB libspdk_trace_parser.a 00:04:35.416 SO libspdk_trace_parser.so.5.0 00:04:35.416 SYMLINK libspdk_trace_parser.so 00:04:35.416 CC lib/conf/conf.o 00:04:35.416 CC lib/idxd/idxd.o 00:04:35.416 CC lib/idxd/idxd_user.o 00:04:35.416 CC lib/vmd/led.o 00:04:35.416 CC lib/env_dpdk/env.o 00:04:35.416 CC lib/vmd/vmd.o 00:04:35.416 CC lib/rdma/common.o 00:04:35.416 CC lib/json/json_parse.o 00:04:35.416 CC lib/rdma/rdma_verbs.o 00:04:35.416 CC lib/idxd/idxd_kernel.o 00:04:35.416 CC lib/env_dpdk/memory.o 00:04:35.416 CC lib/env_dpdk/pci.o 00:04:35.416 CC lib/json/json_util.o 00:04:35.416 CC lib/json/json_write.o 00:04:35.416 LIB libspdk_conf.a 00:04:35.416 CC lib/env_dpdk/init.o 00:04:35.416 SO libspdk_conf.so.6.0 00:04:35.416 LIB libspdk_rdma.a 00:04:35.416 SYMLINK libspdk_conf.so 
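The xnvme submodule above was configured with meson (with-libaio=enabled, with-liburing=enabled, with-libvfn=disabled, with-spdk=false) and compiled with ninja into builddir, producing lib/libxnvme.a and lib/libxnvme.so. A minimal sketch of inspecting or re-running that build outside the autobuild wrapper, assuming the builddir created above is still present; these commands are illustrative, not part of the logged make run:

    # illustrative only -- assumes the builddir from the log still exists
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir        # lists project options, incl. with-libaio/with-liburing/with-libvfn/with-spdk
    meson compile -C builddir       # equivalent to: ninja -C builddir
    ls builddir/lib/libxnvme.a builddir/lib/libxnvme.so   # the static and shared libraries linked above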
00:04:35.416 CC lib/env_dpdk/threads.o 00:04:35.416 SO libspdk_rdma.so.6.0 00:04:35.416 SYMLINK libspdk_rdma.so 00:04:35.416 CC lib/env_dpdk/pci_ioat.o 00:04:35.416 CC lib/env_dpdk/pci_virtio.o 00:04:35.416 CC lib/env_dpdk/pci_vmd.o 00:04:35.416 CC lib/env_dpdk/pci_idxd.o 00:04:35.416 LIB libspdk_json.a 00:04:35.416 CC lib/env_dpdk/pci_event.o 00:04:35.416 CC lib/env_dpdk/sigbus_handler.o 00:04:35.416 SO libspdk_json.so.6.0 00:04:35.416 CC lib/env_dpdk/pci_dpdk.o 00:04:35.416 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:35.416 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:35.416 SYMLINK libspdk_json.so 00:04:35.416 LIB libspdk_idxd.a 00:04:35.676 SO libspdk_idxd.so.12.0 00:04:35.676 LIB libspdk_vmd.a 00:04:35.676 SO libspdk_vmd.so.6.0 00:04:35.676 SYMLINK libspdk_idxd.so 00:04:35.676 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:35.676 CC lib/jsonrpc/jsonrpc_server.o 00:04:35.676 CC lib/jsonrpc/jsonrpc_client.o 00:04:35.676 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:35.676 SYMLINK libspdk_vmd.so 00:04:35.934 LIB libspdk_jsonrpc.a 00:04:36.193 SO libspdk_jsonrpc.so.6.0 00:04:36.193 SYMLINK libspdk_jsonrpc.so 00:04:36.761 CC lib/rpc/rpc.o 00:04:36.761 LIB libspdk_env_dpdk.a 00:04:36.761 SO libspdk_env_dpdk.so.14.0 00:04:36.761 LIB libspdk_rpc.a 00:04:37.021 SO libspdk_rpc.so.6.0 00:04:37.021 SYMLINK libspdk_env_dpdk.so 00:04:37.021 SYMLINK libspdk_rpc.so 00:04:37.280 CC lib/keyring/keyring.o 00:04:37.280 CC lib/keyring/keyring_rpc.o 00:04:37.280 CC lib/notify/notify.o 00:04:37.280 CC lib/trace/trace.o 00:04:37.280 CC lib/notify/notify_rpc.o 00:04:37.280 CC lib/trace/trace_flags.o 00:04:37.280 CC lib/trace/trace_rpc.o 00:04:37.539 LIB libspdk_notify.a 00:04:37.539 SO libspdk_notify.so.6.0 00:04:37.539 LIB libspdk_trace.a 00:04:37.539 SYMLINK libspdk_notify.so 00:04:37.539 LIB libspdk_keyring.a 00:04:37.798 SO libspdk_trace.so.10.0 00:04:37.798 SO libspdk_keyring.so.1.0 00:04:37.798 SYMLINK libspdk_trace.so 00:04:37.798 SYMLINK libspdk_keyring.so 00:04:38.057 CC lib/thread/thread.o 00:04:38.057 CC lib/thread/iobuf.o 00:04:38.316 CC lib/sock/sock.o 00:04:38.316 CC lib/sock/sock_rpc.o 00:04:38.575 LIB libspdk_sock.a 00:04:38.575 SO libspdk_sock.so.9.0 00:04:38.833 SYMLINK libspdk_sock.so 00:04:39.093 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:39.093 CC lib/nvme/nvme_ctrlr.o 00:04:39.093 CC lib/nvme/nvme_fabric.o 00:04:39.093 CC lib/nvme/nvme_ns_cmd.o 00:04:39.093 CC lib/nvme/nvme_ns.o 00:04:39.093 CC lib/nvme/nvme_pcie_common.o 00:04:39.093 CC lib/nvme/nvme_qpair.o 00:04:39.093 CC lib/nvme/nvme.o 00:04:39.093 CC lib/nvme/nvme_pcie.o 00:04:40.032 CC lib/nvme/nvme_quirks.o 00:04:40.032 LIB libspdk_thread.a 00:04:40.032 CC lib/nvme/nvme_transport.o 00:04:40.032 SO libspdk_thread.so.10.0 00:04:40.032 CC lib/nvme/nvme_discovery.o 00:04:40.032 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:40.032 SYMLINK libspdk_thread.so 00:04:40.032 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:40.032 CC lib/nvme/nvme_tcp.o 00:04:40.032 CC lib/nvme/nvme_opal.o 00:04:40.291 CC lib/accel/accel.o 00:04:40.291 CC lib/nvme/nvme_io_msg.o 00:04:40.550 CC lib/accel/accel_rpc.o 00:04:40.550 CC lib/accel/accel_sw.o 00:04:40.810 CC lib/blob/blobstore.o 00:04:40.810 CC lib/nvme/nvme_poll_group.o 00:04:40.810 CC lib/nvme/nvme_zns.o 00:04:40.810 CC lib/virtio/virtio.o 00:04:40.810 CC lib/init/json_config.o 00:04:41.069 CC lib/blob/request.o 00:04:41.069 CC lib/blob/zeroes.o 00:04:41.069 CC lib/init/subsystem.o 00:04:41.069 CC lib/virtio/virtio_vhost_user.o 00:04:41.069 CC lib/virtio/virtio_vfio_user.o 00:04:41.329 CC lib/init/subsystem_rpc.o 00:04:41.329 CC 
lib/blob/blob_bs_dev.o 00:04:41.329 CC lib/init/rpc.o 00:04:41.329 CC lib/virtio/virtio_pci.o 00:04:41.329 CC lib/nvme/nvme_stubs.o 00:04:41.329 CC lib/nvme/nvme_auth.o 00:04:41.589 LIB libspdk_accel.a 00:04:41.589 CC lib/nvme/nvme_cuse.o 00:04:41.589 LIB libspdk_init.a 00:04:41.589 CC lib/nvme/nvme_rdma.o 00:04:41.589 SO libspdk_accel.so.15.0 00:04:41.589 SO libspdk_init.so.5.0 00:04:41.589 SYMLINK libspdk_accel.so 00:04:41.589 SYMLINK libspdk_init.so 00:04:41.589 LIB libspdk_virtio.a 00:04:41.880 SO libspdk_virtio.so.7.0 00:04:41.880 CC lib/event/app.o 00:04:41.880 CC lib/event/reactor.o 00:04:41.880 SYMLINK libspdk_virtio.so 00:04:41.880 CC lib/event/log_rpc.o 00:04:41.880 CC lib/bdev/bdev.o 00:04:41.880 CC lib/bdev/bdev_rpc.o 00:04:41.880 CC lib/bdev/bdev_zone.o 00:04:41.880 CC lib/event/app_rpc.o 00:04:42.139 CC lib/bdev/part.o 00:04:42.139 CC lib/bdev/scsi_nvme.o 00:04:42.139 CC lib/event/scheduler_static.o 00:04:42.399 LIB libspdk_event.a 00:04:42.399 SO libspdk_event.so.13.0 00:04:42.658 SYMLINK libspdk_event.so 00:04:43.228 LIB libspdk_nvme.a 00:04:43.228 SO libspdk_nvme.so.13.0 00:04:43.811 SYMLINK libspdk_nvme.so 00:04:44.750 LIB libspdk_blob.a 00:04:44.750 SO libspdk_blob.so.11.0 00:04:45.010 LIB libspdk_bdev.a 00:04:45.010 SYMLINK libspdk_blob.so 00:04:45.010 SO libspdk_bdev.so.15.0 00:04:45.270 SYMLINK libspdk_bdev.so 00:04:45.270 CC lib/blobfs/tree.o 00:04:45.271 CC lib/blobfs/blobfs.o 00:04:45.271 CC lib/lvol/lvol.o 00:04:45.271 CC lib/nvmf/ctrlr.o 00:04:45.271 CC lib/nvmf/ctrlr_discovery.o 00:04:45.271 CC lib/nbd/nbd.o 00:04:45.271 CC lib/nbd/nbd_rpc.o 00:04:45.271 CC lib/scsi/dev.o 00:04:45.271 CC lib/ftl/ftl_core.o 00:04:45.271 CC lib/ublk/ublk.o 00:04:45.530 CC lib/nvmf/ctrlr_bdev.o 00:04:45.530 CC lib/nvmf/subsystem.o 00:04:45.530 CC lib/scsi/lun.o 00:04:45.789 LIB libspdk_nbd.a 00:04:45.789 CC lib/ftl/ftl_init.o 00:04:45.789 SO libspdk_nbd.so.7.0 00:04:46.049 SYMLINK libspdk_nbd.so 00:04:46.049 CC lib/ublk/ublk_rpc.o 00:04:46.049 CC lib/nvmf/nvmf.o 00:04:46.049 CC lib/scsi/port.o 00:04:46.049 CC lib/ftl/ftl_layout.o 00:04:46.049 CC lib/nvmf/nvmf_rpc.o 00:04:46.049 CC lib/scsi/scsi.o 00:04:46.309 LIB libspdk_ublk.a 00:04:46.309 SO libspdk_ublk.so.3.0 00:04:46.309 LIB libspdk_blobfs.a 00:04:46.309 SO libspdk_blobfs.so.10.0 00:04:46.309 CC lib/nvmf/transport.o 00:04:46.309 CC lib/scsi/scsi_bdev.o 00:04:46.309 SYMLINK libspdk_ublk.so 00:04:46.309 CC lib/nvmf/tcp.o 00:04:46.309 SYMLINK libspdk_blobfs.so 00:04:46.309 CC lib/nvmf/stubs.o 00:04:46.309 LIB libspdk_lvol.a 00:04:46.309 SO libspdk_lvol.so.10.0 00:04:46.569 CC lib/ftl/ftl_debug.o 00:04:46.569 SYMLINK libspdk_lvol.so 00:04:46.569 CC lib/nvmf/mdns_server.o 00:04:46.829 CC lib/ftl/ftl_io.o 00:04:46.829 CC lib/nvmf/rdma.o 00:04:46.829 CC lib/scsi/scsi_pr.o 00:04:47.088 CC lib/nvmf/auth.o 00:04:47.088 CC lib/ftl/ftl_sb.o 00:04:47.088 CC lib/ftl/ftl_l2p.o 00:04:47.088 CC lib/scsi/scsi_rpc.o 00:04:47.088 CC lib/scsi/task.o 00:04:47.088 CC lib/ftl/ftl_l2p_flat.o 00:04:47.088 CC lib/ftl/ftl_nv_cache.o 00:04:47.347 CC lib/ftl/ftl_band.o 00:04:47.347 CC lib/ftl/ftl_band_ops.o 00:04:47.347 CC lib/ftl/ftl_writer.o 00:04:47.347 CC lib/ftl/ftl_rq.o 00:04:47.347 LIB libspdk_scsi.a 00:04:47.347 SO libspdk_scsi.so.9.0 00:04:47.607 CC lib/ftl/ftl_reloc.o 00:04:47.607 CC lib/ftl/ftl_l2p_cache.o 00:04:47.607 SYMLINK libspdk_scsi.so 00:04:47.607 CC lib/ftl/ftl_p2l.o 00:04:47.607 CC lib/ftl/mngt/ftl_mngt.o 00:04:47.867 CC lib/iscsi/conn.o 00:04:47.867 CC lib/vhost/vhost.o 00:04:47.867 CC lib/iscsi/init_grp.o 00:04:47.867 CC 
lib/iscsi/iscsi.o 00:04:47.867 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:48.127 CC lib/iscsi/md5.o 00:04:48.127 CC lib/iscsi/param.o 00:04:48.127 CC lib/iscsi/portal_grp.o 00:04:48.127 CC lib/iscsi/tgt_node.o 00:04:48.387 CC lib/iscsi/iscsi_subsystem.o 00:04:48.387 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:48.387 CC lib/iscsi/iscsi_rpc.o 00:04:48.387 CC lib/iscsi/task.o 00:04:48.387 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:48.387 CC lib/vhost/vhost_rpc.o 00:04:48.387 CC lib/vhost/vhost_scsi.o 00:04:48.387 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:48.645 CC lib/vhost/vhost_blk.o 00:04:48.645 CC lib/vhost/rte_vhost_user.o 00:04:48.645 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:48.645 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:48.904 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:48.904 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:48.904 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:48.904 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:48.904 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:49.163 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:49.163 CC lib/ftl/utils/ftl_conf.o 00:04:49.163 CC lib/ftl/utils/ftl_md.o 00:04:49.163 CC lib/ftl/utils/ftl_mempool.o 00:04:49.421 CC lib/ftl/utils/ftl_bitmap.o 00:04:49.421 LIB libspdk_nvmf.a 00:04:49.421 CC lib/ftl/utils/ftl_property.o 00:04:49.421 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:49.421 LIB libspdk_iscsi.a 00:04:49.421 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:49.421 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:49.421 SO libspdk_nvmf.so.18.0 00:04:49.421 SO libspdk_iscsi.so.8.0 00:04:49.421 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:49.680 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:49.680 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:49.680 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:49.680 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:49.680 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:49.680 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:49.680 SYMLINK libspdk_iscsi.so 00:04:49.680 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:49.680 SYMLINK libspdk_nvmf.so 00:04:49.680 CC lib/ftl/base/ftl_base_dev.o 00:04:49.680 CC lib/ftl/base/ftl_base_bdev.o 00:04:49.680 LIB libspdk_vhost.a 00:04:49.680 CC lib/ftl/ftl_trace.o 00:04:49.938 SO libspdk_vhost.so.8.0 00:04:49.938 SYMLINK libspdk_vhost.so 00:04:49.938 LIB libspdk_ftl.a 00:04:50.196 SO libspdk_ftl.so.9.0 00:04:50.763 SYMLINK libspdk_ftl.so 00:04:51.022 CC module/env_dpdk/env_dpdk_rpc.o 00:04:51.022 CC module/sock/posix/posix.o 00:04:51.022 CC module/keyring/file/keyring.o 00:04:51.022 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:51.022 CC module/accel/ioat/accel_ioat.o 00:04:51.022 CC module/accel/error/accel_error.o 00:04:51.022 CC module/blob/bdev/blob_bdev.o 00:04:51.022 CC module/accel/dsa/accel_dsa.o 00:04:51.022 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:51.022 CC module/accel/iaa/accel_iaa.o 00:04:51.281 LIB libspdk_env_dpdk_rpc.a 00:04:51.281 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.281 CC module/keyring/file/keyring_rpc.o 00:04:51.281 LIB libspdk_scheduler_dpdk_governor.a 00:04:51.281 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.281 CC module/accel/error/accel_error_rpc.o 00:04:51.281 CC module/accel/iaa/accel_iaa_rpc.o 00:04:51.281 CC module/accel/ioat/accel_ioat_rpc.o 00:04:51.281 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:51.281 LIB libspdk_scheduler_dynamic.a 00:04:51.281 SO libspdk_scheduler_dynamic.so.4.0 00:04:51.281 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:51.281 LIB libspdk_blob_bdev.a 00:04:51.281 CC module/accel/dsa/accel_dsa_rpc.o 00:04:51.541 SYMLINK libspdk_scheduler_dynamic.so 00:04:51.541 LIB libspdk_keyring_file.a 00:04:51.541 
SO libspdk_blob_bdev.so.11.0 00:04:51.541 SO libspdk_keyring_file.so.1.0 00:04:51.541 LIB libspdk_accel_error.a 00:04:51.541 LIB libspdk_accel_iaa.a 00:04:51.541 LIB libspdk_accel_ioat.a 00:04:51.541 SO libspdk_accel_error.so.2.0 00:04:51.541 SO libspdk_accel_iaa.so.3.0 00:04:51.541 SYMLINK libspdk_blob_bdev.so 00:04:51.541 SYMLINK libspdk_keyring_file.so 00:04:51.541 CC module/keyring/linux/keyring.o 00:04:51.541 SO libspdk_accel_ioat.so.6.0 00:04:51.541 CC module/keyring/linux/keyring_rpc.o 00:04:51.541 LIB libspdk_accel_dsa.a 00:04:51.541 SYMLINK libspdk_accel_iaa.so 00:04:51.541 SYMLINK libspdk_accel_ioat.so 00:04:51.541 SYMLINK libspdk_accel_error.so 00:04:51.541 SO libspdk_accel_dsa.so.5.0 00:04:51.541 CC module/scheduler/gscheduler/gscheduler.o 00:04:51.541 SYMLINK libspdk_accel_dsa.so 00:04:51.800 LIB libspdk_keyring_linux.a 00:04:51.800 SO libspdk_keyring_linux.so.1.0 00:04:51.800 LIB libspdk_scheduler_gscheduler.a 00:04:51.800 CC module/bdev/gpt/gpt.o 00:04:51.800 SO libspdk_scheduler_gscheduler.so.4.0 00:04:51.800 CC module/bdev/delay/vbdev_delay.o 00:04:51.800 CC module/blobfs/bdev/blobfs_bdev.o 00:04:51.800 CC module/bdev/error/vbdev_error.o 00:04:51.800 SYMLINK libspdk_keyring_linux.so 00:04:51.800 CC module/bdev/lvol/vbdev_lvol.o 00:04:51.800 CC module/bdev/error/vbdev_error_rpc.o 00:04:51.800 CC module/bdev/null/bdev_null.o 00:04:51.800 CC module/bdev/malloc/bdev_malloc.o 00:04:51.800 SYMLINK libspdk_scheduler_gscheduler.so 00:04:51.800 CC module/bdev/gpt/vbdev_gpt.o 00:04:52.060 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:52.060 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:52.060 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:52.060 LIB libspdk_sock_posix.a 00:04:52.060 SO libspdk_sock_posix.so.6.0 00:04:52.060 LIB libspdk_bdev_error.a 00:04:52.060 SO libspdk_bdev_error.so.6.0 00:04:52.060 CC module/bdev/null/bdev_null_rpc.o 00:04:52.060 SYMLINK libspdk_sock_posix.so 00:04:52.060 LIB libspdk_blobfs_bdev.a 00:04:52.060 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:52.060 LIB libspdk_bdev_gpt.a 00:04:52.318 SYMLINK libspdk_bdev_error.so 00:04:52.318 SO libspdk_blobfs_bdev.so.6.0 00:04:52.318 LIB libspdk_bdev_delay.a 00:04:52.318 SO libspdk_bdev_gpt.so.6.0 00:04:52.318 SO libspdk_bdev_delay.so.6.0 00:04:52.318 SYMLINK libspdk_blobfs_bdev.so 00:04:52.318 SYMLINK libspdk_bdev_gpt.so 00:04:52.318 SYMLINK libspdk_bdev_delay.so 00:04:52.318 CC module/bdev/nvme/bdev_nvme.o 00:04:52.318 LIB libspdk_bdev_malloc.a 00:04:52.318 CC module/bdev/passthru/vbdev_passthru.o 00:04:52.318 LIB libspdk_bdev_null.a 00:04:52.318 SO libspdk_bdev_malloc.so.6.0 00:04:52.318 SO libspdk_bdev_null.so.6.0 00:04:52.318 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:52.318 LIB libspdk_bdev_lvol.a 00:04:52.578 SO libspdk_bdev_lvol.so.6.0 00:04:52.578 CC module/bdev/raid/bdev_raid.o 00:04:52.578 CC module/bdev/split/vbdev_split.o 00:04:52.578 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:52.578 SYMLINK libspdk_bdev_malloc.so 00:04:52.578 CC module/bdev/xnvme/bdev_xnvme.o 00:04:52.578 CC module/bdev/split/vbdev_split_rpc.o 00:04:52.578 SYMLINK libspdk_bdev_null.so 00:04:52.578 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:52.578 SYMLINK libspdk_bdev_lvol.so 00:04:52.578 CC module/bdev/raid/bdev_raid_rpc.o 00:04:52.578 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:52.578 CC module/bdev/raid/bdev_raid_sb.o 00:04:52.578 CC module/bdev/nvme/nvme_rpc.o 00:04:52.838 LIB libspdk_bdev_passthru.a 00:04:52.838 LIB libspdk_bdev_split.a 00:04:52.838 SO libspdk_bdev_passthru.so.6.0 00:04:52.838 LIB libspdk_bdev_xnvme.a 
00:04:52.838 SO libspdk_bdev_split.so.6.0 00:04:52.838 CC module/bdev/nvme/bdev_mdns_client.o 00:04:52.838 SO libspdk_bdev_xnvme.so.3.0 00:04:52.838 SYMLINK libspdk_bdev_split.so 00:04:52.838 SYMLINK libspdk_bdev_passthru.so 00:04:52.838 CC module/bdev/nvme/vbdev_opal.o 00:04:52.838 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:52.838 CC module/bdev/raid/raid0.o 00:04:52.838 SYMLINK libspdk_bdev_xnvme.so 00:04:52.838 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:52.838 CC module/bdev/raid/raid1.o 00:04:52.838 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:53.098 LIB libspdk_bdev_zone_block.a 00:04:53.098 CC module/bdev/aio/bdev_aio.o 00:04:53.098 SO libspdk_bdev_zone_block.so.6.0 00:04:53.098 CC module/bdev/aio/bdev_aio_rpc.o 00:04:53.098 SYMLINK libspdk_bdev_zone_block.so 00:04:53.098 CC module/bdev/raid/concat.o 00:04:53.357 CC module/bdev/ftl/bdev_ftl.o 00:04:53.357 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:53.357 CC module/bdev/iscsi/bdev_iscsi.o 00:04:53.357 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:53.357 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:53.357 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:53.357 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:53.357 LIB libspdk_bdev_aio.a 00:04:53.357 SO libspdk_bdev_aio.so.6.0 00:04:53.616 SYMLINK libspdk_bdev_aio.so 00:04:53.616 LIB libspdk_bdev_raid.a 00:04:53.616 LIB libspdk_bdev_ftl.a 00:04:53.616 SO libspdk_bdev_ftl.so.6.0 00:04:53.616 SO libspdk_bdev_raid.so.6.0 00:04:53.874 LIB libspdk_bdev_iscsi.a 00:04:53.874 SO libspdk_bdev_iscsi.so.6.0 00:04:53.874 SYMLINK libspdk_bdev_ftl.so 00:04:53.874 SYMLINK libspdk_bdev_raid.so 00:04:53.874 SYMLINK libspdk_bdev_iscsi.so 00:04:53.874 LIB libspdk_bdev_virtio.a 00:04:54.132 SO libspdk_bdev_virtio.so.6.0 00:04:54.132 SYMLINK libspdk_bdev_virtio.so 00:04:55.068 LIB libspdk_bdev_nvme.a 00:04:55.068 SO libspdk_bdev_nvme.so.7.0 00:04:55.069 SYMLINK libspdk_bdev_nvme.so 00:04:55.636 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:55.636 CC module/event/subsystems/scheduler/scheduler.o 00:04:55.636 CC module/event/subsystems/vmd/vmd.o 00:04:55.636 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:55.636 CC module/event/subsystems/sock/sock.o 00:04:55.636 CC module/event/subsystems/keyring/keyring.o 00:04:55.636 CC module/event/subsystems/iobuf/iobuf.o 00:04:55.636 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:55.896 LIB libspdk_event_vhost_blk.a 00:04:55.896 LIB libspdk_event_sock.a 00:04:55.896 LIB libspdk_event_keyring.a 00:04:55.896 LIB libspdk_event_scheduler.a 00:04:55.896 LIB libspdk_event_vmd.a 00:04:55.896 SO libspdk_event_vhost_blk.so.3.0 00:04:55.896 LIB libspdk_event_iobuf.a 00:04:55.896 SO libspdk_event_sock.so.5.0 00:04:55.896 SO libspdk_event_keyring.so.1.0 00:04:55.896 SO libspdk_event_scheduler.so.4.0 00:04:55.896 SO libspdk_event_vmd.so.6.0 00:04:55.896 SO libspdk_event_iobuf.so.3.0 00:04:55.896 SYMLINK libspdk_event_vhost_blk.so 00:04:55.896 SYMLINK libspdk_event_sock.so 00:04:55.896 SYMLINK libspdk_event_keyring.so 00:04:55.896 SYMLINK libspdk_event_scheduler.so 00:04:55.896 SYMLINK libspdk_event_vmd.so 00:04:55.896 SYMLINK libspdk_event_iobuf.so 00:04:56.464 CC module/event/subsystems/accel/accel.o 00:04:56.464 LIB libspdk_event_accel.a 00:04:56.723 SO libspdk_event_accel.so.6.0 00:04:56.723 SYMLINK libspdk_event_accel.so 00:04:56.982 CC module/event/subsystems/bdev/bdev.o 00:04:57.241 LIB libspdk_event_bdev.a 00:04:57.241 SO libspdk_event_bdev.so.6.0 00:04:57.501 SYMLINK libspdk_event_bdev.so 00:04:57.758 CC module/event/subsystems/scsi/scsi.o 00:04:57.758 CC 
module/event/subsystems/nbd/nbd.o 00:04:57.759 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:57.759 CC module/event/subsystems/ublk/ublk.o 00:04:57.759 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:58.016 LIB libspdk_event_scsi.a 00:04:58.016 LIB libspdk_event_ublk.a 00:04:58.016 LIB libspdk_event_nbd.a 00:04:58.016 SO libspdk_event_scsi.so.6.0 00:04:58.016 SO libspdk_event_nbd.so.6.0 00:04:58.016 SO libspdk_event_ublk.so.3.0 00:04:58.016 LIB libspdk_event_nvmf.a 00:04:58.016 SYMLINK libspdk_event_nbd.so 00:04:58.016 SYMLINK libspdk_event_scsi.so 00:04:58.016 SYMLINK libspdk_event_ublk.so 00:04:58.016 SO libspdk_event_nvmf.so.6.0 00:04:58.016 SYMLINK libspdk_event_nvmf.so 00:04:58.276 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:58.276 CC module/event/subsystems/iscsi/iscsi.o 00:04:58.535 LIB libspdk_event_vhost_scsi.a 00:04:58.535 SO libspdk_event_vhost_scsi.so.3.0 00:04:58.535 LIB libspdk_event_iscsi.a 00:04:58.535 SYMLINK libspdk_event_vhost_scsi.so 00:04:58.535 SO libspdk_event_iscsi.so.6.0 00:04:58.793 SYMLINK libspdk_event_iscsi.so 00:04:59.052 SO libspdk.so.6.0 00:04:59.052 SYMLINK libspdk.so 00:04:59.312 CC app/trace_record/trace_record.o 00:04:59.312 CC app/spdk_lspci/spdk_lspci.o 00:04:59.312 CC app/spdk_nvme_identify/identify.o 00:04:59.312 CC app/spdk_nvme_perf/perf.o 00:04:59.312 CXX app/trace/trace.o 00:04:59.312 CC app/iscsi_tgt/iscsi_tgt.o 00:04:59.312 CC app/nvmf_tgt/nvmf_main.o 00:04:59.312 CC app/spdk_tgt/spdk_tgt.o 00:04:59.312 CC examples/accel/perf/accel_perf.o 00:04:59.312 LINK spdk_lspci 00:04:59.312 CC test/accel/dif/dif.o 00:04:59.312 LINK nvmf_tgt 00:04:59.571 LINK iscsi_tgt 00:04:59.571 LINK spdk_trace_record 00:04:59.571 LINK spdk_tgt 00:04:59.571 CC app/spdk_nvme_discover/discovery_aer.o 00:04:59.571 LINK spdk_trace 00:04:59.829 CC app/spdk_top/spdk_top.o 00:04:59.829 LINK spdk_nvme_discover 00:04:59.829 LINK accel_perf 00:04:59.829 CC app/vhost/vhost.o 00:04:59.829 LINK dif 00:04:59.829 CC app/spdk_dd/spdk_dd.o 00:04:59.829 CC test/app/bdev_svc/bdev_svc.o 00:05:00.088 CC app/fio/nvme/fio_plugin.o 00:05:00.088 LINK vhost 00:05:00.088 LINK bdev_svc 00:05:00.088 LINK spdk_nvme_perf 00:05:00.088 LINK spdk_nvme_identify 00:05:00.088 CC test/bdev/bdevio/bdevio.o 00:05:00.088 CC examples/bdev/hello_world/hello_bdev.o 00:05:00.088 CC test/blobfs/mkfs/mkfs.o 00:05:00.347 LINK spdk_dd 00:05:00.347 TEST_HEADER include/spdk/accel.h 00:05:00.347 TEST_HEADER include/spdk/accel_module.h 00:05:00.347 TEST_HEADER include/spdk/assert.h 00:05:00.347 TEST_HEADER include/spdk/barrier.h 00:05:00.347 TEST_HEADER include/spdk/base64.h 00:05:00.347 TEST_HEADER include/spdk/bdev.h 00:05:00.347 TEST_HEADER include/spdk/bdev_module.h 00:05:00.347 TEST_HEADER include/spdk/bdev_zone.h 00:05:00.347 TEST_HEADER include/spdk/bit_array.h 00:05:00.347 TEST_HEADER include/spdk/bit_pool.h 00:05:00.347 TEST_HEADER include/spdk/blob_bdev.h 00:05:00.347 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:00.347 TEST_HEADER include/spdk/blobfs.h 00:05:00.347 TEST_HEADER include/spdk/blob.h 00:05:00.347 TEST_HEADER include/spdk/conf.h 00:05:00.347 TEST_HEADER include/spdk/config.h 00:05:00.347 TEST_HEADER include/spdk/cpuset.h 00:05:00.347 TEST_HEADER include/spdk/crc16.h 00:05:00.347 TEST_HEADER include/spdk/crc32.h 00:05:00.347 TEST_HEADER include/spdk/crc64.h 00:05:00.347 TEST_HEADER include/spdk/dif.h 00:05:00.347 TEST_HEADER include/spdk/dma.h 00:05:00.347 TEST_HEADER include/spdk/endian.h 00:05:00.347 TEST_HEADER include/spdk/env_dpdk.h 00:05:00.347 TEST_HEADER include/spdk/env.h 
00:05:00.347 TEST_HEADER include/spdk/event.h 00:05:00.347 TEST_HEADER include/spdk/fd_group.h 00:05:00.347 TEST_HEADER include/spdk/fd.h 00:05:00.347 TEST_HEADER include/spdk/file.h 00:05:00.347 CC test/app/histogram_perf/histogram_perf.o 00:05:00.347 TEST_HEADER include/spdk/ftl.h 00:05:00.347 TEST_HEADER include/spdk/gpt_spec.h 00:05:00.347 TEST_HEADER include/spdk/hexlify.h 00:05:00.347 TEST_HEADER include/spdk/histogram_data.h 00:05:00.347 TEST_HEADER include/spdk/idxd.h 00:05:00.347 TEST_HEADER include/spdk/idxd_spec.h 00:05:00.347 TEST_HEADER include/spdk/init.h 00:05:00.347 TEST_HEADER include/spdk/ioat.h 00:05:00.347 LINK hello_bdev 00:05:00.347 TEST_HEADER include/spdk/ioat_spec.h 00:05:00.347 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:00.347 TEST_HEADER include/spdk/iscsi_spec.h 00:05:00.347 TEST_HEADER include/spdk/json.h 00:05:00.347 TEST_HEADER include/spdk/jsonrpc.h 00:05:00.347 TEST_HEADER include/spdk/keyring.h 00:05:00.347 TEST_HEADER include/spdk/keyring_module.h 00:05:00.347 CC app/fio/bdev/fio_plugin.o 00:05:00.347 TEST_HEADER include/spdk/likely.h 00:05:00.347 TEST_HEADER include/spdk/log.h 00:05:00.347 LINK mkfs 00:05:00.347 TEST_HEADER include/spdk/lvol.h 00:05:00.347 TEST_HEADER include/spdk/memory.h 00:05:00.347 TEST_HEADER include/spdk/mmio.h 00:05:00.347 TEST_HEADER include/spdk/nbd.h 00:05:00.347 TEST_HEADER include/spdk/notify.h 00:05:00.347 TEST_HEADER include/spdk/nvme.h 00:05:00.347 TEST_HEADER include/spdk/nvme_intel.h 00:05:00.347 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:00.347 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:00.347 TEST_HEADER include/spdk/nvme_spec.h 00:05:00.347 TEST_HEADER include/spdk/nvme_zns.h 00:05:00.347 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:00.347 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:00.347 TEST_HEADER include/spdk/nvmf.h 00:05:00.347 TEST_HEADER include/spdk/nvmf_spec.h 00:05:00.347 TEST_HEADER include/spdk/nvmf_transport.h 00:05:00.347 TEST_HEADER include/spdk/opal.h 00:05:00.347 TEST_HEADER include/spdk/opal_spec.h 00:05:00.347 TEST_HEADER include/spdk/pci_ids.h 00:05:00.347 TEST_HEADER include/spdk/pipe.h 00:05:00.347 TEST_HEADER include/spdk/queue.h 00:05:00.347 TEST_HEADER include/spdk/reduce.h 00:05:00.347 TEST_HEADER include/spdk/rpc.h 00:05:00.606 LINK histogram_perf 00:05:00.606 TEST_HEADER include/spdk/scheduler.h 00:05:00.606 TEST_HEADER include/spdk/scsi.h 00:05:00.606 TEST_HEADER include/spdk/scsi_spec.h 00:05:00.606 TEST_HEADER include/spdk/sock.h 00:05:00.606 TEST_HEADER include/spdk/stdinc.h 00:05:00.606 TEST_HEADER include/spdk/string.h 00:05:00.606 TEST_HEADER include/spdk/thread.h 00:05:00.606 TEST_HEADER include/spdk/trace.h 00:05:00.606 TEST_HEADER include/spdk/trace_parser.h 00:05:00.606 TEST_HEADER include/spdk/tree.h 00:05:00.606 TEST_HEADER include/spdk/ublk.h 00:05:00.606 TEST_HEADER include/spdk/util.h 00:05:00.606 TEST_HEADER include/spdk/uuid.h 00:05:00.606 TEST_HEADER include/spdk/version.h 00:05:00.606 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:00.606 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:00.606 TEST_HEADER include/spdk/vhost.h 00:05:00.606 TEST_HEADER include/spdk/vmd.h 00:05:00.606 TEST_HEADER include/spdk/xor.h 00:05:00.606 TEST_HEADER include/spdk/zipf.h 00:05:00.606 CXX test/cpp_headers/accel.o 00:05:00.606 LINK bdevio 00:05:00.606 LINK spdk_nvme 00:05:00.606 CC test/dma/test_dma/test_dma.o 00:05:00.606 LINK spdk_top 00:05:00.606 CC examples/bdev/bdevperf/bdevperf.o 00:05:00.606 CXX test/cpp_headers/accel_module.o 00:05:00.866 CXX test/cpp_headers/assert.o 
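The CXX test/cpp_headers/*.o entries compile each public SPDK header as its own translation unit, so a header that fails to pull in its own dependencies breaks the build here rather than in a consumer. A minimal sketch of the same check done by hand, using a hypothetical wrapper file rather than the actual test Makefile:

    for hdr in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > /tmp/hdr_check.cpp
        g++ -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $hdr"
    done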
00:05:00.866 LINK nvme_fuzz 00:05:00.866 CC examples/ioat/perf/perf.o 00:05:00.866 LINK spdk_bdev 00:05:00.866 CC test/env/mem_callbacks/mem_callbacks.o 00:05:00.866 CC examples/blob/hello_world/hello_blob.o 00:05:00.866 CXX test/cpp_headers/barrier.o 00:05:00.866 CC examples/ioat/verify/verify.o 00:05:00.866 CC examples/blob/cli/blobcli.o 00:05:01.126 LINK test_dma 00:05:01.126 CXX test/cpp_headers/base64.o 00:05:01.126 LINK mem_callbacks 00:05:01.126 CC test/app/jsoncat/jsoncat.o 00:05:01.126 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:01.126 LINK ioat_perf 00:05:01.126 LINK hello_blob 00:05:01.126 LINK verify 00:05:01.126 LINK jsoncat 00:05:01.126 CXX test/cpp_headers/bdev.o 00:05:01.126 CC test/env/vtophys/vtophys.o 00:05:01.385 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:01.385 CXX test/cpp_headers/bdev_module.o 00:05:01.385 LINK vtophys 00:05:01.385 CC examples/nvme/hello_world/hello_world.o 00:05:01.385 LINK blobcli 00:05:01.385 CC examples/sock/hello_world/hello_sock.o 00:05:01.385 LINK bdevperf 00:05:01.385 LINK env_dpdk_post_init 00:05:01.385 CC examples/vmd/lsvmd/lsvmd.o 00:05:01.644 CXX test/cpp_headers/bdev_zone.o 00:05:01.644 CC examples/nvmf/nvmf/nvmf.o 00:05:01.644 LINK lsvmd 00:05:01.644 CC examples/vmd/led/led.o 00:05:01.644 LINK hello_world 00:05:01.644 CXX test/cpp_headers/bit_array.o 00:05:01.644 CC test/env/memory/memory_ut.o 00:05:01.644 LINK hello_sock 00:05:01.644 CC examples/nvme/reconnect/reconnect.o 00:05:01.644 LINK led 00:05:01.644 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:01.905 CXX test/cpp_headers/bit_pool.o 00:05:01.905 CC examples/nvme/arbitration/arbitration.o 00:05:01.905 LINK nvmf 00:05:01.905 CXX test/cpp_headers/blob_bdev.o 00:05:01.905 CC examples/nvme/hotplug/hotplug.o 00:05:01.905 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:01.905 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:01.905 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.166 LINK reconnect 00:05:02.166 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.166 LINK hotplug 00:05:02.166 LINK arbitration 00:05:02.166 LINK cmb_copy 00:05:02.166 CXX test/cpp_headers/blobfs.o 00:05:02.166 CC test/event/event_perf/event_perf.o 00:05:02.166 CXX test/cpp_headers/blob.o 00:05:02.166 CXX test/cpp_headers/conf.o 00:05:02.425 LINK nvme_manage 00:05:02.425 LINK event_perf 00:05:02.425 LINK memory_ut 00:05:02.425 CXX test/cpp_headers/config.o 00:05:02.425 CXX test/cpp_headers/cpuset.o 00:05:02.425 CC examples/util/zipf/zipf.o 00:05:02.425 LINK vhost_fuzz 00:05:02.425 CC examples/thread/thread/thread_ex.o 00:05:02.684 CC examples/nvme/abort/abort.o 00:05:02.684 CC test/nvme/aer/aer.o 00:05:02.684 CC test/lvol/esnap/esnap.o 00:05:02.684 CC test/event/reactor/reactor.o 00:05:02.684 CXX test/cpp_headers/crc16.o 00:05:02.684 LINK zipf 00:05:02.684 CC test/env/pci/pci_ut.o 00:05:02.684 LINK reactor 00:05:02.684 CC test/nvme/reset/reset.o 00:05:02.684 CXX test/cpp_headers/crc32.o 00:05:02.684 LINK thread 00:05:02.684 LINK iscsi_fuzz 00:05:02.943 LINK aer 00:05:02.943 CC test/nvme/sgl/sgl.o 00:05:02.943 CXX test/cpp_headers/crc64.o 00:05:02.943 CC test/event/reactor_perf/reactor_perf.o 00:05:02.943 LINK abort 00:05:02.943 LINK reset 00:05:02.943 CXX test/cpp_headers/dif.o 00:05:02.943 CC test/nvme/e2edp/nvme_dp.o 00:05:03.202 LINK reactor_perf 00:05:03.202 LINK pci_ut 00:05:03.202 CC test/rpc_client/rpc_client_test.o 00:05:03.202 CC test/app/stub/stub.o 00:05:03.202 LINK sgl 00:05:03.202 CXX test/cpp_headers/dma.o 00:05:03.202 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:03.202 
CC test/nvme/overhead/overhead.o 00:05:03.202 CC test/event/app_repeat/app_repeat.o 00:05:03.460 LINK rpc_client_test 00:05:03.460 LINK stub 00:05:03.460 LINK nvme_dp 00:05:03.460 CXX test/cpp_headers/endian.o 00:05:03.460 LINK pmr_persistence 00:05:03.460 LINK app_repeat 00:05:03.460 CC test/nvme/err_injection/err_injection.o 00:05:03.460 CXX test/cpp_headers/env_dpdk.o 00:05:03.460 LINK overhead 00:05:03.460 CC test/thread/poller_perf/poller_perf.o 00:05:03.460 CC test/nvme/startup/startup.o 00:05:03.460 CXX test/cpp_headers/env.o 00:05:03.718 CXX test/cpp_headers/event.o 00:05:03.718 LINK err_injection 00:05:03.718 LINK poller_perf 00:05:03.718 CC examples/idxd/perf/perf.o 00:05:03.718 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:03.718 LINK startup 00:05:03.718 CXX test/cpp_headers/fd_group.o 00:05:03.718 CC test/event/scheduler/scheduler.o 00:05:03.718 CC test/nvme/reserve/reserve.o 00:05:03.977 LINK interrupt_tgt 00:05:03.977 CC test/nvme/simple_copy/simple_copy.o 00:05:03.977 CC test/nvme/connect_stress/connect_stress.o 00:05:03.977 CXX test/cpp_headers/fd.o 00:05:03.977 CC test/nvme/boot_partition/boot_partition.o 00:05:03.977 CC test/nvme/compliance/nvme_compliance.o 00:05:03.977 LINK scheduler 00:05:03.977 LINK reserve 00:05:03.977 CXX test/cpp_headers/file.o 00:05:03.977 LINK idxd_perf 00:05:04.235 LINK connect_stress 00:05:04.235 LINK boot_partition 00:05:04.235 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.235 LINK simple_copy 00:05:04.235 CXX test/cpp_headers/ftl.o 00:05:04.235 CXX test/cpp_headers/gpt_spec.o 00:05:04.235 CXX test/cpp_headers/hexlify.o 00:05:04.235 CXX test/cpp_headers/histogram_data.o 00:05:04.235 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:04.235 LINK nvme_compliance 00:05:04.235 LINK fused_ordering 00:05:04.493 CXX test/cpp_headers/idxd.o 00:05:04.493 CC test/nvme/fdp/fdp.o 00:05:04.493 CC test/nvme/cuse/cuse.o 00:05:04.493 CXX test/cpp_headers/idxd_spec.o 00:05:04.493 CXX test/cpp_headers/init.o 00:05:04.493 CXX test/cpp_headers/ioat.o 00:05:04.493 LINK doorbell_aers 00:05:04.493 CXX test/cpp_headers/ioat_spec.o 00:05:04.493 CXX test/cpp_headers/iscsi_spec.o 00:05:04.493 CXX test/cpp_headers/json.o 00:05:04.493 CXX test/cpp_headers/jsonrpc.o 00:05:04.493 CXX test/cpp_headers/keyring.o 00:05:04.493 CXX test/cpp_headers/keyring_module.o 00:05:04.493 CXX test/cpp_headers/likely.o 00:05:04.750 CXX test/cpp_headers/log.o 00:05:04.750 CXX test/cpp_headers/lvol.o 00:05:04.750 CXX test/cpp_headers/memory.o 00:05:04.750 CXX test/cpp_headers/mmio.o 00:05:04.750 CXX test/cpp_headers/nbd.o 00:05:04.750 CXX test/cpp_headers/notify.o 00:05:04.750 CXX test/cpp_headers/nvme.o 00:05:04.750 CXX test/cpp_headers/nvme_intel.o 00:05:04.750 LINK fdp 00:05:04.750 CXX test/cpp_headers/nvme_ocssd.o 00:05:04.750 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:04.750 CXX test/cpp_headers/nvme_spec.o 00:05:04.750 CXX test/cpp_headers/nvme_zns.o 00:05:04.750 CXX test/cpp_headers/nvmf_cmd.o 00:05:04.750 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.008 CXX test/cpp_headers/nvmf.o 00:05:05.008 CXX test/cpp_headers/nvmf_spec.o 00:05:05.008 CXX test/cpp_headers/nvmf_transport.o 00:05:05.008 CXX test/cpp_headers/opal.o 00:05:05.008 CXX test/cpp_headers/opal_spec.o 00:05:05.008 CXX test/cpp_headers/pci_ids.o 00:05:05.008 CXX test/cpp_headers/pipe.o 00:05:05.008 CXX test/cpp_headers/queue.o 00:05:05.008 CXX test/cpp_headers/reduce.o 00:05:05.008 CXX test/cpp_headers/rpc.o 00:05:05.008 CXX test/cpp_headers/scheduler.o 00:05:05.008 CXX test/cpp_headers/scsi.o 00:05:05.008 CXX 
test/cpp_headers/scsi_spec.o 00:05:05.008 CXX test/cpp_headers/sock.o 00:05:05.008 CXX test/cpp_headers/stdinc.o 00:05:05.267 CXX test/cpp_headers/string.o 00:05:05.267 CXX test/cpp_headers/thread.o 00:05:05.267 CXX test/cpp_headers/trace.o 00:05:05.267 CXX test/cpp_headers/trace_parser.o 00:05:05.267 CXX test/cpp_headers/tree.o 00:05:05.267 CXX test/cpp_headers/ublk.o 00:05:05.267 CXX test/cpp_headers/util.o 00:05:05.267 CXX test/cpp_headers/uuid.o 00:05:05.267 CXX test/cpp_headers/version.o 00:05:05.267 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.267 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.267 CXX test/cpp_headers/vhost.o 00:05:05.267 CXX test/cpp_headers/vmd.o 00:05:05.526 CXX test/cpp_headers/xor.o 00:05:05.526 CXX test/cpp_headers/zipf.o 00:05:05.526 LINK cuse 00:05:08.071 LINK esnap 00:05:08.638 00:05:08.638 real 1m1.379s 00:05:08.638 user 5m14.385s 00:05:08.638 sys 1m15.452s 00:05:08.638 18:23:08 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:05:08.638 18:23:08 make -- common/autotest_common.sh@10 -- $ set +x 00:05:08.638 ************************************ 00:05:08.638 END TEST make 00:05:08.638 ************************************ 00:05:08.638 18:23:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:08.638 18:23:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:08.638 18:23:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:08.638 18:23:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.638 18:23:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:08.638 18:23:08 -- pm/common@44 -- $ pid=6147 00:05:08.638 18:23:08 -- pm/common@50 -- $ kill -TERM 6147 00:05:08.638 18:23:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.638 18:23:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:08.638 18:23:08 -- pm/common@44 -- $ pid=6148 00:05:08.638 18:23:08 -- pm/common@50 -- $ kill -TERM 6148 00:05:08.897 18:23:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.897 18:23:08 -- nvmf/common.sh@7 -- # uname -s 00:05:08.897 18:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.897 18:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.897 18:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.897 18:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.897 18:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.897 18:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.897 18:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.897 18:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.897 18:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.897 18:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.897 18:23:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:05:08.897 18:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:05:08.897 18:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.897 18:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.897 18:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.897 18:23:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.898 18:23:08 -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.898 18:23:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.898 18:23:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.898 18:23:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.898 18:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.898 18:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.898 18:23:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.898 18:23:08 -- paths/export.sh@5 -- # export PATH 00:05:08.898 18:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.898 18:23:08 -- nvmf/common.sh@47 -- # : 0 00:05:08.898 18:23:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.898 18:23:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.898 18:23:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.898 18:23:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.898 18:23:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.898 18:23:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.898 18:23:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.898 18:23:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.898 18:23:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:08.898 18:23:08 -- spdk/autotest.sh@32 -- # uname -s 00:05:08.898 18:23:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:08.898 18:23:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:08.898 18:23:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.898 18:23:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:08.898 18:23:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.898 18:23:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:08.898 18:23:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:08.898 18:23:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:08.898 18:23:08 -- spdk/autotest.sh@48 -- # udevadm_pid=65989 00:05:08.898 18:23:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:08.898 18:23:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:08.898 18:23:08 -- pm/common@17 -- # local monitor 00:05:08.898 18:23:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.898 18:23:08 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:08.898 18:23:08 -- pm/common@25 -- # sleep 1 00:05:08.898 18:23:08 -- pm/common@21 -- # date +%s 00:05:08.898 18:23:08 -- pm/common@21 -- # date +%s 00:05:08.898 18:23:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721758988 00:05:08.898 18:23:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721758988 00:05:08.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721758988_collect-vmstat.pm.log 00:05:08.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721758988_collect-cpu-load.pm.log 00:05:09.836 18:23:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:09.836 18:23:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:09.836 18:23:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:09.836 18:23:09 -- common/autotest_common.sh@10 -- # set +x 00:05:09.836 18:23:09 -- spdk/autotest.sh@59 -- # create_test_list 00:05:09.836 18:23:09 -- common/autotest_common.sh@744 -- # xtrace_disable 00:05:09.836 18:23:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.096 18:23:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:10.096 18:23:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:10.096 18:23:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:10.096 18:23:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:10.096 18:23:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:10.096 18:23:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:10.096 18:23:09 -- common/autotest_common.sh@1451 -- # uname 00:05:10.096 18:23:09 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:05:10.096 18:23:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:10.096 18:23:09 -- common/autotest_common.sh@1471 -- # uname 00:05:10.096 18:23:09 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:05:10.096 18:23:09 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:10.096 18:23:09 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:10.096 18:23:09 -- spdk/autotest.sh@72 -- # hash lcov 00:05:10.096 18:23:09 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:10.096 18:23:09 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:10.096 --rc lcov_branch_coverage=1 00:05:10.096 --rc lcov_function_coverage=1 00:05:10.096 --rc genhtml_branch_coverage=1 00:05:10.096 --rc genhtml_function_coverage=1 00:05:10.096 --rc genhtml_legend=1 00:05:10.096 --rc geninfo_all_blocks=1 00:05:10.096 ' 00:05:10.096 18:23:09 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:10.096 --rc lcov_branch_coverage=1 00:05:10.096 --rc lcov_function_coverage=1 00:05:10.096 --rc genhtml_branch_coverage=1 00:05:10.096 --rc genhtml_function_coverage=1 00:05:10.096 --rc genhtml_legend=1 00:05:10.096 --rc geninfo_all_blocks=1 00:05:10.096 ' 00:05:10.096 18:23:09 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:10.096 --rc lcov_branch_coverage=1 00:05:10.096 --rc lcov_function_coverage=1 00:05:10.096 --rc genhtml_branch_coverage=1 00:05:10.096 --rc genhtml_function_coverage=1 00:05:10.096 --rc genhtml_legend=1 00:05:10.096 --rc geninfo_all_blocks=1 00:05:10.096 --no-external' 00:05:10.096 18:23:09 
-- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:10.096 --rc lcov_branch_coverage=1 00:05:10.096 --rc lcov_function_coverage=1 00:05:10.096 --rc genhtml_branch_coverage=1 00:05:10.096 --rc genhtml_function_coverage=1 00:05:10.096 --rc genhtml_legend=1 00:05:10.096 --rc geninfo_all_blocks=1 00:05:10.096 --no-external' 00:05:10.096 18:23:09 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:10.096 lcov: LCOV version 1.14 00:05:10.096 18:23:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:25.048 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:25.048 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:35.033 
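The lcov run above records an all-zero baseline (-c -i) before any test executes; the geninfo "no functions found" warnings that follow are expected for .gcno files whose translation units define no functions. A minimal sketch of the capture-and-merge flow those options imply, with hypothetical file names and a $SPDK_DIR placeholder:

    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -c -i -d "$SPDK_DIR" -t Baseline -o cov_base.info
    # ... run the test suites so the .gcda counters get written ...
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -c -d "$SPDK_DIR" -t Tests -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info       # merged report still lists never-executed files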
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:35.033 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:35.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:35.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:35.034 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:35.034 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:35.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:35.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:37.568 18:23:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:37.568 18:23:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.568 18:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 18:23:37 -- spdk/autotest.sh@91 -- # rm -f 00:05:37.568 18:23:37 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.711 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:38.711 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:38.711 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:38.982 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:38.982 18:23:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:38.982 18:23:38 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:38.982 18:23:38 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:38.982 18:23:38 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # 
is_block_zoned nvme0n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:38.982 18:23:38 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:38.982 18:23:38 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:38.982 18:23:38 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:38.982 18:23:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:38.982 18:23:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.982 18:23:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:38.982 18:23:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:38.982 18:23:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:38.982 18:23:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.982 No valid GPT data, bailing 00:05:38.982 18:23:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.982 18:23:38 -- 
scripts/common.sh@391 -- # pt= 00:05:38.982 18:23:38 -- scripts/common.sh@392 -- # return 1 00:05:38.982 18:23:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.982 1+0 records in 00:05:38.982 1+0 records out 00:05:38.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637103 s, 165 MB/s 00:05:38.982 18:23:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.982 18:23:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:38.982 18:23:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:38.982 18:23:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:38.982 18:23:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:38.982 No valid GPT data, bailing 00:05:38.982 18:23:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:38.982 18:23:38 -- scripts/common.sh@391 -- # pt= 00:05:38.982 18:23:38 -- scripts/common.sh@392 -- # return 1 00:05:38.982 18:23:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:38.982 1+0 records in 00:05:38.982 1+0 records out 00:05:38.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062714 s, 167 MB/s 00:05:38.982 18:23:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.982 18:23:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:38.982 18:23:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:38.982 18:23:38 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:38.982 18:23:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:38.982 No valid GPT data, bailing 00:05:38.982 18:23:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # pt= 00:05:39.242 18:23:39 -- scripts/common.sh@392 -- # return 1 00:05:39.242 18:23:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:39.242 1+0 records in 00:05:39.242 1+0 records out 00:05:39.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629404 s, 167 MB/s 00:05:39.242 18:23:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.242 18:23:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:39.242 18:23:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:39.242 18:23:39 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:39.242 18:23:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:39.242 No valid GPT data, bailing 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # pt= 00:05:39.242 18:23:39 -- scripts/common.sh@392 -- # return 1 00:05:39.242 18:23:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:39.242 1+0 records in 00:05:39.242 1+0 records out 00:05:39.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637992 s, 164 MB/s 00:05:39.242 18:23:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.242 18:23:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:39.242 18:23:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:39.242 18:23:39 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:39.242 18:23:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:39.242 No valid GPT data, bailing 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:39.242 
18:23:39 -- scripts/common.sh@391 -- # pt= 00:05:39.242 18:23:39 -- scripts/common.sh@392 -- # return 1 00:05:39.242 18:23:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:39.242 1+0 records in 00:05:39.242 1+0 records out 00:05:39.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158903 s, 66.0 MB/s 00:05:39.242 18:23:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:39.242 18:23:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:39.242 18:23:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:39.242 18:23:39 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:39.242 18:23:39 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:39.242 No valid GPT data, bailing 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:39.242 18:23:39 -- scripts/common.sh@391 -- # pt= 00:05:39.242 18:23:39 -- scripts/common.sh@392 -- # return 1 00:05:39.502 18:23:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:39.502 1+0 records in 00:05:39.502 1+0 records out 00:05:39.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611426 s, 171 MB/s 00:05:39.502 18:23:39 -- spdk/autotest.sh@118 -- # sync 00:05:39.502 18:23:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:39.502 18:23:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:39.502 18:23:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:42.040 18:23:41 -- spdk/autotest.sh@124 -- # uname -s 00:05:42.040 18:23:41 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:42.040 18:23:41 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:42.040 18:23:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.040 18:23:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.040 18:23:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.040 ************************************ 00:05:42.040 START TEST setup.sh 00:05:42.040 ************************************ 00:05:42.040 18:23:41 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:42.040 * Looking for test storage... 00:05:42.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.040 18:23:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:42.040 18:23:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:42.040 18:23:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:42.040 18:23:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.040 18:23:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.040 18:23:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:42.040 ************************************ 00:05:42.040 START TEST acl 00:05:42.040 ************************************ 00:05:42.040 18:23:42 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:42.299 * Looking for test storage... 
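The block_in_use sequence earlier in this run probes each whole NVMe namespace with spdk-gpt.py and blkid and, when no partition table turns up ("No valid GPT data, bailing", empty PTTYPE), zeroes the first 1 MiB so stale metadata cannot skew later tests. A minimal sketch of that guard, as a hypothetical loop without the spdk-gpt.py probe:

    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                   # whole namespaces only, skip partitions
        pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no partition table is present
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1     # the same 1 MiB wipe the log shows per device
        fi
    done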
00:05:42.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
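The trace around this point shows the same device-claiming pattern autotest applied during the dd wipes above: each /sys/block/nvme* namespace is checked for the sysfs zoned attribute (zoned namespaces are skipped), a namespace is only written to once blkid/spdk-gpt.py report no partition table on it, and the first MiB is then zeroed with dd to mark it in use. A minimal portable sketch of that pattern, assuming plain bash and util-linux blkid; the extglob loop `/dev/nvme*n!(*p*)` and the spdk-gpt.py cross-check from the trace are replaced with simpler stand-ins here:
#!/usr/bin/env bash
# Sketch: claim unpartitioned, non-zoned NVMe namespaces (assumes util-linux blkid, root).
for dev in /dev/nvme*n*; do
  [[ $dev == *p* ]] && continue                       # skip partitions such as nvme0n1p1
  name=${dev#/dev/}
  # Skip zoned namespaces: the sysfs attribute reads "none" for conventional devices.
  if [[ -e /sys/block/$name/queue/zoned && $(cat "/sys/block/$name/queue/zoned") != none ]]; then
    continue
  fi
  # Only touch the namespace if it carries no partition table.
  pt=$(blkid -s PTTYPE -o value "$dev" || true)
  if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1           # zero the first MiB, as in the trace above
  fi
done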
00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:42.299 18:23:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:42.299 18:23:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:42.299 18:23:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.299 18:23:42 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.688 18:23:43 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:43.688 18:23:43 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:43.688 18:23:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:43.688 18:23:43 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:43.688 18:23:43 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.688 18:23:43 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:44.287 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:44.287 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.287 18:23:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.854 Hugepages 00:05:44.854 node hugesize free / total 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.854 00:05:44.854 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:44.854 18:23:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:45.114 18:23:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:45.114 18:23:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:45.114 18:23:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:45.114 18:23:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:45.114 18:23:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:45.114 18:23:45 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:45.114 18:23:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:45.374 18:23:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:45.374 18:23:45 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.374 18:23:45 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.374 18:23:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 ************************************ 00:05:45.374 START TEST denied 00:05:45.374 ************************************ 00:05:45.374 18:23:45 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:45.374 18:23:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:45.374 18:23:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:45.374 18:23:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:45.374 18:23:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.374 18:23:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.279 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.279 18:23:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.941 00:05:53.941 real 0m7.684s 00:05:53.941 user 0m0.886s 00:05:53.941 sys 0m1.905s 00:05:53.941 18:23:53 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.941 18:23:53 setup.sh.acl.denied -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.941 ************************************ 00:05:53.941 END TEST denied 00:05:53.941 ************************************ 00:05:53.941 18:23:53 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:53.941 18:23:53 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.941 18:23:53 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.941 18:23:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:53.941 ************************************ 00:05:53.941 START TEST allowed 00:05:53.941 ************************************ 00:05:53.941 18:23:53 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:53.941 18:23:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:53.941 18:23:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:53.941 18:23:53 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:53.941 18:23:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.941 18:23:53 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:54.511 18:23:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.891 00:05:55.891 real 0m2.760s 00:05:55.891 user 0m1.079s 00:05:55.891 sys 0m1.697s 00:05:55.891 18:23:55 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.891 18:23:55 setup.sh.acl.allowed -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.891 ************************************ 00:05:55.891 END TEST allowed 00:05:55.891 ************************************ 00:05:55.891 00:05:55.891 real 0m13.847s 00:05:55.891 user 0m3.328s 00:05:55.891 sys 0m5.661s 00:05:55.891 18:23:55 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.891 18:23:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:55.891 ************************************ 00:05:55.891 END TEST acl 00:05:55.891 ************************************ 00:05:56.187 18:23:55 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:56.187 18:23:55 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.187 18:23:55 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.187 18:23:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:56.187 ************************************ 00:05:56.187 START TEST hugepages 00:05:56.187 ************************************ 00:05:56.187 18:23:55 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:56.187 * Looking for test storage... 00:05:56.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4697244 kB' 'MemAvailable: 7387444 kB' 'Buffers: 2436 kB' 'Cached: 2894144 kB' 'SwapCached: 0 kB' 'Active: 449164 kB' 'Inactive: 2554040 kB' 'Active(anon): 117140 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 108496 kB' 'Mapped: 48780 kB' 'Shmem: 10516 kB' 'KReclaimable: 82152 kB' 'Slab: 163716 kB' 'SReclaimable: 82152 kB' 'SUnreclaim: 81564 kB' 'KernelStack: 6428 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 331972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.187 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
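The long run of "continue" records here is the hugepages helper stepping through /proc/meminfo one field at a time: each line is split on ': ', the key is compared against the requested field (Hugepagesize in this pass), and reading continues until the key matches, at which point the value is echoed back. A compact sketch of the same lookup, written as a standalone function (meminfo_field is a hypothetical name, not SPDK's setup/common.sh helper):
# Sketch of the field lookup the surrounding trace performs (standalone, not SPDK's helper).
meminfo_field() {
  local want=$1 var val _
  while IFS=': ' read -r var val _; do      # "Hugepagesize:    2048 kB" -> key / value / unit
    if [[ $var == "$want" ]]; then
      echo "$val"                           # value only, unit dropped (e.g. 2048)
      return 0
    fi
  done < /proc/meminfo
  return 1                                  # field not present
}
hugepagesize_kb=$(meminfo_field Hugepagesize)   # 2048 on this runner, per the trace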
00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.188 18:23:56 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.188 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:56.189 18:23:56 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:56.189 18:23:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:56.189 18:23:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.189 18:23:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.189 18:23:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:56.189 ************************************ 00:05:56.189 START TEST default_setup 00:05:56.189 ************************************ 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.189 18:23:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.710 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.710 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.710 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.710 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6813040 kB' 'MemAvailable: 9503036 kB' 'Buffers: 2436 kB' 'Cached: 2894132 kB' 'SwapCached: 0 kB' 'Active: 462056 kB' 'Inactive: 2554064 kB' 'Active(anon): 130032 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48840 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 162900 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81208 kB' 'KernelStack: 6448 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.710 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:57.711 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
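Note on the trace above and below: this is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time. It mapfiles the file, strips any "Node N " prefix (so the same code works on per-node sysfs meminfo files), then reads each line as a key/value pair with IFS=': ' and continues until the requested counter (AnonHugePages just finished; HugePages_Surp is next) matches, at which point it echoes the value and returns. A minimal, self-contained sketch of that pattern, reconstructed from the trace; get_meminfo_sketch is an illustrative name, not the repo's actual identifier:

#!/usr/bin/env bash
shopt -s extglob                      # required for the +([0-9]) pattern below
get_meminfo_sketch() {
    local get=$1                      # counter to look up, e.g. HugePages_Surp
    local node=${2:-}                 # optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                   # kB for sizes, a bare count for HugePages_*
        return 0
    done
    return 1
}
# e.g.: surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run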
00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812788 kB' 'MemAvailable: 9502784 kB' 'Buffers: 2436 kB' 'Cached: 2894132 kB' 'SwapCached: 0 kB' 'Active: 461656 kB' 'Inactive: 2554064 kB' 'Active(anon): 129632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 162900 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81208 kB' 'KernelStack: 6480 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.712 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:57.713 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6813460 kB' 'MemAvailable: 9503456 kB' 'Buffers: 2436 kB' 'Cached: 2894132 kB' 'SwapCached: 0 kB' 'Active: 461724 kB' 'Inactive: 2554064 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120876 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 162900 kB' 
'SReclaimable: 81692 kB' 'SUnreclaim: 81208 kB' 'KernelStack: 6496 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.714 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.715 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:57.716 nr_hugepages=1024 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:57.716 resv_hugepages=0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:57.716 surplus_hugepages=0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:57.716 anon_hugepages=0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6813208 kB' 'MemAvailable: 9503204 kB' 'Buffers: 2436 kB' 'Cached: 2894132 kB' 'SwapCached: 0 kB' 'Active: 461516 kB' 'Inactive: 2554064 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120616 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 162900 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81208 kB' 'KernelStack: 6496 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.716 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 
18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:57.717 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6813208 kB' 'MemUsed: 5428768 kB' 'SwapCached: 0 kB' 'Active: 461516 kB' 'Inactive: 2554064 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 2896568 kB' 'Mapped: 48720 kB' 'AnonPages: 120616 kB' 'Shmem: 10476 kB' 'KernelStack: 6496 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81692 kB' 'Slab: 162900 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.718 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
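The block above is the tail of get_meminfo HugePages_Surp 0: setup/common.sh points mem_f at /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from every line, splits each line on IFS=': ', and skips (continue) every key until HugePages_Surp, where it echoes the value (0) and returns. A minimal sketch of that parsing pattern, assuming a simplified stand-alone helper rather than the verbatim setup/common.sh function:

  shopt -s extglob
  # Sketch only: get_meminfo_sketch KEY [NODE] prints the value of KEY from
  # /proc/meminfo, or from the per-node meminfo file when NODE is given.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (extglob), as in the trace
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # Example matching the trace above: get_meminfo_sketch HugePages_Surp 0  ->  0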
00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.719 node0=1024 expecting 1024 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:57.719 00:05:57.719 real 0m1.579s 00:05:57.719 user 0m0.606s 00:05:57.719 sys 0m0.951s 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.719 18:23:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:57.719 ************************************ 00:05:57.719 END TEST default_setup 00:05:57.719 ************************************ 00:05:57.979 18:23:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:57.979 18:23:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.979 18:23:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.979 18:23:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:57.979 ************************************ 00:05:57.979 START TEST per_node_1G_alloc 00:05:57.979 ************************************ 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.979 18:23:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.499 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.499 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.499 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.499 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:58.499 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7863452 kB' 'MemAvailable: 10553456 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461720 kB' 'Inactive: 2554072 kB' 'Active(anon): 129696 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120756 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163004 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81312 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.500 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.501 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7863452 kB' 'MemAvailable: 10553456 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461684 kB' 'Inactive: 2554072 kB' 'Active(anon): 129660 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120752 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163004 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81312 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.765 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.766 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7863452 kB' 'MemAvailable: 10553456 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461660 kB' 'Inactive: 2554072 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120720 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163004 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81312 kB' 'KernelStack: 6464 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.767 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.768 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:58.769 nr_hugepages=512 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:58.769 resv_hugepages=0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:58.769 surplus_hugepages=0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:58.769 anon_hugepages=0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7862948 kB' 'MemAvailable: 10552952 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461872 kB' 'Inactive: 2554072 kB' 'Active(anon): 129848 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 
'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163004 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81312 kB' 'KernelStack: 6448 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 348848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.769 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.770 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
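The long run of IFS=': ' / read -r var val _ / continue entries above and below is setup/common.sh's get_meminfo helper scanning a meminfo file one key at a time until it reaches the requested field; every key that is not the one asked for produces exactly one [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] test followed by a continue, which is why the loop dominates the trace. A minimal sketch of that pattern, written in bash and assuming single-digit NUMA node ids (the real helper in setup/common.sh differs in details such as its use of mapfile):

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val
    # per-node statistics come from sysfs rather than /proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node [0-9] }                # node files prefix each key with "Node <n>"
        IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Total:   512" -> var=HugePages_Total val=512
        [[ $var == "$get" ]] && { echo "$val"; return 0; }  # match: print the value and stop
    done < "$mem_f"
    return 1
}

Called as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, it prints just the numeric column, which is what the surrounding hugepages.sh checks consume.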
00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7863340 kB' 'MemUsed: 4378636 kB' 'SwapCached: 0 kB' 'Active: 461512 kB' 'Inactive: 2554072 kB' 'Active(anon): 129488 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 2896572 kB' 
'Mapped: 48720 kB' 'AnonPages: 120580 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81692 kB' 'Slab: 163004 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.771 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.772 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:58.773 node0=512 expecting 512 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:58.773 00:05:58.773 real 0m0.858s 00:05:58.773 user 0m0.364s 00:05:58.773 sys 0m0.540s 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.773 18:23:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:58.773 ************************************ 00:05:58.773 END TEST per_node_1G_alloc 00:05:58.773 ************************************ 00:05:58.773 18:23:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:58.773 18:23:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.773 18:23:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
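Just before even_2G_alloc starts, the per_node_1G_alloc block above prints node0=512 expecting 512 and passes its [[ 512 == 512 ]] comparison. As a simplified sketch rather than the exact hugepages.sh code, the per-node bookkeeping traced there amounts to the following (get_meminfo is the helper sketched earlier; nodes_test/nodes_sys mirror the array names in the trace):

nodes_test=(512)   # pages the test expects per node (a single node on this VM)
nodes_sys=(512)    # pages actually reported for node0 via sysfs
resv=0             # reserved pages taken from the global meminfo
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                    # reserved pages count toward the node
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # surplus pages, 0 in this run
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1       # any mismatch fails the test
done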
00:05:58.773 18:23:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:58.773 ************************************ 00:05:58.773 START TEST even_2G_alloc 00:05:58.773 ************************************ 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.773 18:23:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.343 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.343 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.343 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.343 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:59.343 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:59.343 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:59.343 18:23:59 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:59.607 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818024 kB' 'MemAvailable: 9508028 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 462088 kB' 'Inactive: 2554072 kB' 'Active(anon): 130064 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121160 kB' 'Mapped: 48844 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163040 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6472 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
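The even_2G_alloc test that begins above asks get_test_nr_hugepages for 2097152 kB, settles on nr_hugepages=1024, assigns all of them to the only node, and re-runs scripts/setup.sh with NRHUGE=1024 HUGE_EVEN_ALLOC=yes. Those numbers are consistent with the 2048 kB Hugepagesize shown in the meminfo dumps; a back-of-the-envelope sketch of the sizing (some names mirror the trace, the rest are illustrative only):

size_kb=2097152                                  # requested pool: 2 GiB expressed in kB
hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024 pages
no_nodes=1                                       # nodes present under /sys/devices/system/node/
per_node=$(( nr_hugepages / no_nodes ))          # all 1024 pages land on node0 on this single-node VM
echo "NRHUGE=$per_node HUGE_EVEN_ALLOC=yes"      # environment handed to scripts/setup.sh in the trace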
00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.608 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 6817772 kB' 'MemAvailable: 9507776 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 462048 kB' 'Inactive: 2554072 kB' 'Active(anon): 130024 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163088 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81396 kB' 'KernelStack: 6480 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.609 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 
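The backslash-escaped right-hand side of each test (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) is an xtrace rendering, not how the script is written: when the word after == inside [[ ]] comes from a quoted expansion, bash's trace escapes every character to show it is compared literally rather than as a glob, which suggests the comparison in common.sh is written roughly as:

  # Assumed form of the common.sh@32 test, shown only to illustrate the xtrace escaping:
  set -x
  var=MemTotal get=HugePages_Surp
  [[ $var == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  [[ $var == $get ]]     # an unquoted RHS would be traced without the escapes and matched as a pattern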
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.610 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6817772 kB' 'MemAvailable: 9507776 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461716 kB' 'Inactive: 2554072 kB' 'Active(anon): 129692 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120832 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163088 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81396 kB' 'KernelStack: 6496 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.611 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 
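On the caller's side, the hugepages.sh@97-@100 entries capture these lookups into the variables reported later in the log; a hedged sketch of that step (reconstructed, using get_meminfo from the sketch above):

  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # 0
  resv=$(get_meminfo HugePages_Rsvd)   # 0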
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.612 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.613 
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:59.613 nr_hugepages=1024 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:59.613 resv_hugepages=0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:59.613 surplus_hugepages=0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:59.613 anon_hugepages=0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6817768 kB' 'MemAvailable: 9507772 kB' 'Buffers: 2436 kB' 'Cached: 2894136 kB' 'SwapCached: 0 kB' 'Active: 461672 kB' 'Inactive: 2554072 kB' 'Active(anon): 129648 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120788 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163076 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81384 kB' 'KernelStack: 6480 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.613 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 
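With all three lookups back at 0, the trace reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then runs the consistency checks at hugepages.sh@107-@110: the 1024 pages requested for the even 2G allocation (1024 x 2048 kB = 2 GB, matching the Hugetlb line in the snapshot) must all be plain nr_hugepages with no surplus or reserved pages, and HugePages_Total from /proc/meminfo must agree. Condensed, with the pre-expansion form of the @110 check assumed:

  nr_hugepages=1024 surp=0 resv=0 anon=0                    # values echoed at hugepages.sh@102-@105
  (( 1024 == nr_hugepages + surp + resv ))                  # @107
  (( 1024 == nr_hugepages ))                                # @109
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # @110; xtrace prints it as "1024 == ..."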
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.614 18:23:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.614 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
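The long run of "[[ Foo == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo (or per-node meminfo) field until it reaches the requested key. A minimal reconstruction of that scan, pieced together from the trace (the real helper in setup/common.sh may carry extra bookkeeping):

# Minimal reconstruction of the meminfo walk traced above (assumption: the
# real get_meminfo in setup/common.sh does more than this sketch).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # When a node is given, prefer its per-node meminfo file if present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    shopt -s extglob
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every field that is not the requested key -- exactly the long
        # run of "[[ Foo == HugePages_Total ]] ... continue" entries above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Total (system-wide) or get_meminfo_sketch HugePages_Surp 0 (per node), mirroring the two uses visible in this log.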
00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6817768 kB' 'MemUsed: 5424208 kB' 'SwapCached: 0 kB' 'Active: 461856 kB' 'Inactive: 2554072 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 2896572 kB' 'Mapped: 48720 kB' 'AnonPages: 120924 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81692 kB' 'Slab: 163076 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.615 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 
18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:59.616 node0=1024 expecting 1024 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:59.616 00:05:59.616 real 0m0.844s 00:05:59.616 user 0m0.369s 00:05:59.616 sys 0m0.524s 00:05:59.616 18:23:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.617 18:23:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:59.617 ************************************ 00:05:59.617 END TEST even_2G_alloc 00:05:59.617 ************************************ 00:05:59.617 18:23:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:59.617 18:23:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.617 18:23:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.617 18:23:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:59.617 ************************************ 00:05:59.617 START TEST odd_alloc 00:05:59.617 ************************************ 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
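The even_2G_alloc block closes above with verify_nr_hugepages reconciling the global count against the per-node view: get_nodes reads HugePages_Total from every /sys/devices/system/node/node*/meminfo, reserved and surplus pages are folded into the expected figure, and 'node0=1024 expecting 1024' is only printed once both sides line up. A hedged sketch of that accounting, reusing get_meminfo_sketch from the earlier sketch (variable names follow the trace; the authoritative logic lives in setup/hugepages.sh):

# Per-node reconciliation sketched from the trace (assumption: resv tracks
# reserved pages and is 0 in this run, as the "+= resv" step above suggests).
declare -a nodes_sys=() nodes_test=()
nr_hugepages=1024
resv=0

# get_nodes: what the kernel reports per NUMA node.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_sys[node]=$(get_meminfo_sketch HugePages_Total "$node")
    nodes_test[node]=$nr_hugepages   # single-node VM in this run: whole target on node0
done

# Fold reserved and surplus pages into the expected per-node figure.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

The odd_alloc test that starts right after repeats the same cycle with nr_hugepages=1025 (HUGEMEM=2049, HUGE_EVEN_ALLOC=yes), an intentionally odd page count for the 2098176 kB request.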
00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.617 18:23:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:00.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.450 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.450 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.450 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.450 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6815444 kB' 'MemAvailable: 9505452 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 461732 kB' 'Inactive: 2554076 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120768 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163196 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81504 kB' 'KernelStack: 6448 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.450 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.451 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.452 
18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6815456 kB' 'MemAvailable: 9505464 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 461696 kB' 'Inactive: 2554076 kB' 'Active(anon): 129672 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120768 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163192 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6448 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.452 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 
18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.453 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6815456 kB' 'MemAvailable: 9505464 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 461612 kB' 'Inactive: 2554076 kB' 'Active(anon): 129588 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120688 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163188 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81496 kB' 'KernelStack: 6448 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 
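[editor note] The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one it was asked for (first HugePages_Surp, giving surp=0, and now HugePages_Rsvd against the snapshot just printed). A minimal, self-contained sketch of that lookup in bash, with illustrative names rather than the exact SPDK source, looks like:

# Sketch: print the value of one /proc/meminfo (or per-node meminfo) field,
# the way the traced get_meminfo does for HugePages_Surp / HugePages_Rsvd.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # If a node id was given and the per-node file exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # Per-node files prefix each entry with "Node <id> "; strip it so the
        # field name compares cleanly against $get.
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. "0" for HugePages_Surp; kB fields lose the unit
            return 0
        fi
    done < "$mem_f"
    return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # evaluates to 0 in the log above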
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.454 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 
18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.455 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:00.456 nr_hugepages=1025 00:06:00.456 resv_hugepages=0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:00.456 surplus_hugepages=0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:00.456 anon_hugepages=0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.456 
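[editor note] At this point the trace has all three counters for the odd_alloc case: surp=0 and resv=0 were just computed, nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 were echoed, and hugepages.sh asserts that the HugePages_Total it reads next accounts for the requested pool plus surplus and reserved pages. A sketch of that bookkeeping using plain awk against /proc/meminfo (assuming 1025 pages were requested, and mirroring the check the trace performs rather than reproducing hugepages.sh verbatim):

# Read the three counters straight from /proc/meminfo.
nr_requested=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Same consistency check as the trace: the total the kernel reports must
# equal the requested pool plus any surplus and reserved pages.
if (( total == nr_requested + surp + resv )); then
    echo "odd_alloc consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage count: total=$total" >&2
fi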
18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6815456 kB' 'MemAvailable: 9505464 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 461608 kB' 'Inactive: 2554076 kB' 'Active(anon): 129584 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120720 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163188 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81496 kB' 'KernelStack: 6448 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 348976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.456 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.457 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6816068 kB' 'MemUsed: 5425908 kB' 'SwapCached: 0 kB' 'Active: 461540 kB' 'Inactive: 2554076 kB' 'Active(anon): 129516 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2896576 kB' 'Mapped: 48720 kB' 'AnonPages: 120624 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81692 kB' 'Slab: 163176 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 
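[editor note] The remaining trace is the per-node pass: get_nodes globs /sys/devices/system/node/node*, records 1025 pages for the single node, and get_meminfo is then called with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo and every line carries a "Node 0" prefix (visible in the snapshot just printed). A standalone sketch of that per-node read, using the standard sysfs layout rather than the test's helpers:

# Print the surplus hugepage count for every NUMA node on the system.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Surp:     0"
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
    echo "node$node: HugePages_Surp=$surp"
done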
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.458 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:00.459 node0=1025 expecting 1025 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:00.459 00:06:00.459 real 0m0.861s 00:06:00.459 user 0m0.363s 00:06:00.459 sys 0m0.550s 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.459 18:24:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:00.459 ************************************ 00:06:00.459 END TEST odd_alloc 00:06:00.459 ************************************ 00:06:00.720 18:24:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:00.720 18:24:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.720 18:24:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.720 18:24:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:00.720 ************************************ 00:06:00.720 START TEST custom_alloc 00:06:00.720 ************************************ 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- 
# local _no_nodes=1 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.720 18:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.289 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.289 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.289 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.289 0000:00:13.0 
(1b36 0010): Already using the uio_pci_generic driver 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7867076 kB' 'MemAvailable: 10557084 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 461460 kB' 'Inactive: 2554076 kB' 'Active(anon): 129436 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118452 kB' 'Mapped: 48316 kB' 'Shmem: 10476 kB' 'KReclaimable: 81692 kB' 'Slab: 163160 kB' 'SReclaimable: 81692 kB' 'SUnreclaim: 81468 kB' 'KernelStack: 6464 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:01.289 
18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 
18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.289 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.571 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873924 kB' 'MemAvailable: 10563928 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458460 kB' 'Inactive: 2554076 kB' 'Active(anon): 126436 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117548 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 163036 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6384 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.572 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 
18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.573 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.574 18:24:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873924 kB' 'MemAvailable: 10563928 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458652 kB' 'Inactive: 2554076 kB' 'Active(anon): 126628 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117768 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 163032 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6384 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.574 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 
18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.575 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.576 18:24:01 
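The trace above repeatedly walks /proc/meminfo field by field (HugePages_Surp, then HugePages_Rsvd) before echoing the matched value. As a reading aid, the following is a minimal sketch of a get_meminfo-style helper reconstructed from that trace; it is illustrative only, and the actual setup/common.sh implementation may differ in detail.

    #!/usr/bin/env bash
    shopt -s extglob   # the trace strips "Node <n> " prefixes with an extglob pattern

    # get_meminfo FIELD [NODE]
    # Prints the value of FIELD from /proc/meminfo, or from the per-node
    # meminfo file when a NUMA node number is given (as in the trace above).
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # Example usage matching the values scanned in this log:
    # surp=$(get_meminfo HugePages_Surp)    -> 0
    # resv=$(get_meminfo HugePages_Rsvd)    -> 0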
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:01.576 nr_hugepages=512 00:06:01.576 resv_hugepages=0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:01.576 surplus_hugepages=0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:01.576 anon_hugepages=0 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873924 kB' 'MemAvailable: 10563928 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458596 kB' 'Inactive: 2554076 kB' 'Active(anon): 126572 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117660 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 163020 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81332 kB' 'KernelStack: 6352 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.576 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.577 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:01.578 18:24:01 
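After HugePages_Total is read back as 512, the script checks the global count against nr_hugepages + surp + resv, enumerates the NUMA nodes under /sys/devices/system/node, and then queries HugePages_Surp for node 0. A hedged sketch of that verification step, with illustrative variable names rather than the script's own, might look like this (it assumes the get_meminfo helper sketched earlier):

    # Sketch of the custom_alloc verification flow suggested by the trace;
    # names and structure are assumptions, not the exact setup/hugepages.sh code.
    shopt -s extglob
    verify_custom_alloc() {
        local nr_hugepages=$1
        local surp resv total node
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        # Global check seen in the trace: 512 == nr_hugepages + surp + resv.
        (( total == nr_hugepages + surp + resv )) || return 1
        # Enumerate NUMA nodes the same way the trace does (node+([0-9])).
        local -A nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        # For a fixed custom allocation, each node's surplus should be zero.
        for node in "${!nodes_sys[@]}"; do
            (( $(get_meminfo HugePages_Surp "$node") == 0 )) || return 1
        done
    }

    # e.g. verify_custom_alloc 512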
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873924 kB' 'MemUsed: 4368052 kB' 'SwapCached: 0 kB' 'Active: 458376 kB' 'Inactive: 2554076 kB' 'Active(anon): 126352 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2896576 kB' 'Mapped: 47984 kB' 'AnonPages: 117464 kB' 'Shmem: 10476 kB' 'KernelStack: 6352 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 163020 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.578 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:01.579 node0=512 expecting 512 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:01.579 00:06:01.579 real 0m0.942s 00:06:01.579 user 0m0.356s 00:06:01.579 sys 0m0.646s 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.579 18:24:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:01.579 ************************************ 00:06:01.579 END TEST custom_alloc 00:06:01.579 ************************************ 00:06:01.579 18:24:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:01.579 18:24:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.579 18:24:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.579 18:24:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:01.579 ************************************ 00:06:01.579 START TEST no_shrink_alloc 00:06:01.579 ************************************ 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:01.579 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:01.580 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.580 18:24:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.413 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:02.413 0000:00:10.0 (1b36 0010): Already using the 
uio_pci_generic driver 00:06:02.413 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:02.413 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6821964 kB' 'MemAvailable: 9511968 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458944 kB' 'Inactive: 2554076 kB' 'Active(anon): 126920 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117836 kB' 'Mapped: 48124 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162884 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81196 kB' 'KernelStack: 6420 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.413 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
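Annotation: the get_test_nr_hugepages 2097152 0 call traced just before the device-binding lines (setup/hugepages.sh@195, ending in nr_hugepages=1024 at @57 and nodes_test[_no_nodes]=1024 at @71) is consistent with the meminfo dump above: 2097152 kB requested at the reported Hugepagesize of 2048 kB gives 1024 pages, all assigned to node 0, which also matches the HugePages_Total/HugePages_Free the dump shows. A minimal sketch of that arithmetic, using variable names of my own rather than the script's:

  request_kb=2097152       # argument passed to get_test_nr_hugepages in the trace
  hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" from the meminfo dump above
  nr_hugepages=$(( request_kb / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"   # -> 1024, matching nodes_test[0]=1024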
00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 
18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
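Annotation: the long run of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue entries above is the xtrace of get_meminfo AnonHugePages in setup/common.sh scanning every meminfo field until it reaches the requested key; each non-matching key produces one test/continue pair in the log. Pieced together from the traced line numbers (common.sh@17-@33), the function looks roughly like the sketch below. This is a reconstruction for readability, not the verbatim SPDK source:

  shopt -s extglob                      # needed for the +([0-9]) pattern below

  get_meminfo() {                       # sketch of setup/common.sh:get_meminfo
      local get=$1 node=$2
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # A node argument switches to the per-node meminfo file (common.sh@23-@24).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node N " (common.sh@29)

      # Scan field by field: every key that is not $get shows up in the trace as
      # one [[ ... ]] / continue pair (common.sh@31-@32); the match echoes the
      # value and returns (common.sh@33).
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
  }

  get_meminfo HugePages_Free            # would print 1024 on the box traced here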
00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.414 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
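Annotation: the AnonHugePages probe ended with anon=0 (hugepages.sh@97, taken because THP reads "always [madvise] never", i.e. not [never]), and the lines that follow repeat the same get_meminfo scan for HugePages_Surp in this pass and HugePages_Rsvd in the next. Per the traced locals (hugepages.sh@92-@94) and assignments, verify_nr_hugepages is collecting roughly the following; a hedged paraphrase of the bookkeeping, not the script itself:

  anon=0                               # AnonHugePages; probed only because THP is not "[never]"
  surp=$(get_meminfo HugePages_Surp)   # surplus pages -> 0 in this run (hugepages.sh@99)
  resv=$(get_meminfo HugePages_Rsvd)   # reserved pages, fetched right after (hugepages.sh@100)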
00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6823740 kB' 'MemAvailable: 9513744 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458940 kB' 'Inactive: 2554076 kB' 'Active(anon): 126916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117872 kB' 'Mapped: 48124 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162884 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81196 kB' 'KernelStack: 6420 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.415 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.416 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
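Annotation: all three lookups in this verify pass run with an empty node argument (local node= at common.sh@18), so the existence test at @23 degenerates to /sys/devices/system/node/node/meminfo, fails, and the values come from the system-wide /proc/meminfo. The per-node branch at @24 was only taken in the custom_alloc check at the top of this excerpt, where node0's file reported HugePages_Total: 512. A two-line illustration of the fallback:

  node=""    # as at common.sh@18 when get_meminfo is called without a node
  [[ -e /sys/devices/system/node/node$node/meminfo ]] || echo "falling back to /proc/meminfo"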
00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6823740 kB' 'MemAvailable: 9513744 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458840 kB' 'Inactive: 2554076 kB' 'Active(anon): 126816 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117760 kB' 'Mapped: 48124 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162880 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81192 kB' 'KernelStack: 6404 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.417 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.418 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:02.419 nr_hugepages=1024 00:06:02.419 resv_hugepages=0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:02.419 surplus_hugepages=0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:02.419 anon_hugepages=0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
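(Editorial note: at this point hugepages.sh has collected surp=0 and resv=0 and checks them against the requested pool size. The sketch below, using the get_meminfo sketch above and a hypothetical name verify_hugepages, only illustrates that bookkeeping; it is not the actual hugepages.sh code.)

verify_hugepages() {
	local nr_hugepages=$1   # what the test asked for (1024 in this run)
	local total rsvd surp

	total=$(get_meminfo HugePages_Total)
	rsvd=$(get_meminfo HugePages_Rsvd)
	surp=$(get_meminfo HugePages_Surp)

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$rsvd"
	echo "surplus_hugepages=$surp"

	# The pool the kernel reports must account for the requested pages plus any
	# surplus/reserved pages; here 1024 == 1024 + 0 + 0, so the check passes.
	(( total == nr_hugepages + surp + rsvd )) || return 1
}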
00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6823488 kB' 'MemAvailable: 9513492 kB' 'Buffers: 2436 kB' 'Cached: 2894140 kB' 'SwapCached: 0 kB' 'Active: 458632 kB' 'Inactive: 2554076 kB' 'Active(anon): 126608 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 117704 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162840 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81152 kB' 'KernelStack: 6368 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.419 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.419 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.420 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6823488 kB' 'MemUsed: 5418488 kB' 'SwapCached: 0 kB' 'Active: 458704 kB' 'Inactive: 2554076 kB' 'Active(anon): 126680 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554076 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2896576 kB' 'Mapped: 47984 kB' 'AnonPages: 117820 kB' 'Shmem: 10476 kB' 'KernelStack: 6384 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 162836 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.421 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.682 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 
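The long run of [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue records above is common.sh walking every line of /proc/meminfo until it reaches the one field it was asked for. A minimal sketch of that scan pattern, using an illustrative helper name (get_meminfo_sketch) rather than the repo's actual get_meminfo:

    #!/usr/bin/env bash
    # Sketch only: scan /proc/meminfo for one key and print its value
    # (a kB figure or a page count, depending on the field).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every line whose key does not match the requested one,
            # exactly like the "continue" records in the trace above.
            [[ $var == "$get" ]] || continue
            echo "${val%% *}"   # drop a trailing " kB" unit if one is present
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp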
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:02.683 node0=1024 expecting 1024 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:02.683 18:24:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.943 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.203 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.203 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.203 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.203 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:03.203 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6827364 kB' 'MemAvailable: 9517372 kB' 'Buffers: 2436 kB' 'Cached: 2894144 kB' 'SwapCached: 0 kB' 'Active: 459036 kB' 'Inactive: 2554080 kB' 'Active(anon): 127012 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 48156 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162744 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81056 kB' 'KernelStack: 6384 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.203 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 
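The snapshot printed just above is internally consistent: 1024 huge pages of 2048 kB each account for the reported Hugetlb figure of 2097152 kB. A small illustrative check that re-derives that number from a live /proc/meminfo (the field names are the standard kernel ones; the comparison itself is only a sketch, and Hugetlb would also include any other huge page sizes in use):

    # Confirm that HugePages_Total * Hugepagesize explains Hugetlb.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    echo "expected $((total * size_kb)) kB, kernel reports $hugetlb_kb kB"
    # With the values in this trace: 1024 * 2048 kB = 2097152 kB, matching Hugetlb.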
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 
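The earlier "INFO: Requested 512 hugepages but 1024 already allocated on node0" message (NRHUGE=512 with CLEAR_HUGE=no) refers to the per-node sysfs counters rather than /proc/meminfo. A hedged way to inspect the same counters by hand, assuming 2048 kB pages on node0:

    # Per-node allocation for 2 MiB huge pages (standard sysfs layout).
    node_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    echo "node0 nr_hugepages:   $(cat "$node_path/nr_hugepages")"
    echo "node0 free_hugepages: $(cat "$node_path/free_hugepages")"
    # With CLEAR_HUGE=no the existing allocation (1024 here) is left in place
    # even though only NRHUGE=512 was requested, which is what the INFO line reports.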
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.204 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.466 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 
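This AnonHugePages lookup only happens because the earlier test on the transparent_hugepage state string ("always [madvise] never") did not match *[never]*. A minimal sketch of that gate (variable name illustrative, sysfs path is the standard one):

    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    # The bracketed word is the active mode, e.g. "always [madvise] never".
    if [[ $thp_state != *"[never]"* ]]; then
        # THP is at least partially enabled, so the anon huge page figure matters.
        awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, $3}' /proc/meminfo
    else
        echo "THP disabled; skipping the AnonHugePages lookup"
    fi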
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6827616 kB' 'MemAvailable: 9517628 kB' 'Buffers: 2436 kB' 'Cached: 2894148 kB' 'SwapCached: 0 kB' 'Active: 458820 kB' 'Inactive: 2554084 kB' 'Active(anon): 126796 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117920 kB' 'Mapped: 48096 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162752 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 81064 kB' 'KernelStack: 6384 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.467 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.468 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6827616 kB' 'MemAvailable: 9517628 kB' 'Buffers: 2436 kB' 'Cached: 2894148 kB' 'SwapCached: 0 kB' 'Active: 458480 kB' 'Inactive: 2554084 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117584 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162648 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 80960 kB' 'KernelStack: 6384 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.469 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.469 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:03.470 nr_hugepages=1024 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:03.470 resv_hugepages=0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:03.470 surplus_hugepages=0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:03.470 anon_hugepages=0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:03.470 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.470 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6827616 kB' 'MemAvailable: 9517628 kB' 'Buffers: 2436 kB' 'Cached: 2894148 kB' 'SwapCached: 0 kB' 'Active: 458480 kB' 'Inactive: 2554084 kB' 'Active(anon): 126456 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 117880 kB' 'Mapped: 47984 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 162648 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 80960 kB' 'KernelStack: 6400 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 
18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.471 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.472 
18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6827616 kB' 'MemUsed: 5414360 kB' 'SwapCached: 0 kB' 'Active: 458668 kB' 'Inactive: 2554084 kB' 'Active(anon): 126644 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2554084 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2896584 kB' 'Mapped: 47984 kB' 'AnonPages: 117804 kB' 'Shmem: 10476 kB' 'KernelStack: 6384 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 162648 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 80960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.472 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.473 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:03.474 node0=1024 expecting 1024 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:03.474 00:06:03.474 real 0m1.856s 00:06:03.474 user 0m0.824s 00:06:03.474 sys 0m1.157s 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.474 18:24:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 
-- # set +x 00:06:03.474 ************************************ 00:06:03.474 END TEST no_shrink_alloc 00:06:03.474 ************************************ 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:03.474 18:24:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:03.474 00:06:03.474 real 0m7.488s 00:06:03.474 user 0m3.078s 00:06:03.474 sys 0m4.731s 00:06:03.474 18:24:03 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.474 18:24:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:03.474 ************************************ 00:06:03.474 END TEST hugepages 00:06:03.474 ************************************ 00:06:03.474 18:24:03 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:03.474 18:24:03 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.474 18:24:03 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.474 18:24:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:03.474 ************************************ 00:06:03.474 START TEST driver 00:06:03.474 ************************************ 00:06:03.474 18:24:03 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:03.733 * Looking for test storage... 
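The hugepages run that just ended reads counters one "Key: value" field at a time with IFS=': ' and read -r, and clear_hp finishes by writing 0 into every per-node hugepage pool before exporting CLEAR_HUGE=yes. The following is a minimal sketch of those two steps, not the literal setup/common.sh and setup/hugepages.sh functions; the helper names are illustrative, and the nr_hugepages target file is the standard kernel sysfs name, which the trace itself does not spell out.

  # pull a single "Key: value" counter out of /proc/meminfo, the way the trace scans fields
  get_meminfo_field() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$want" ]]; then echo "$val"; return 0; fi
      done < /proc/meminfo
      return 1
  }

  # zero every hugepage pool on every NUMA node, as clear_hp does above (needs root)
  clear_hugepages() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"
          done
      done
      export CLEAR_HUGE=yes
  }

Scanning field by field keeps the parsing independent of where a given counter sits in meminfo output, which is why the trace shows so many skipped keys before HugePages_Surp matches.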
00:06:03.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:03.733 18:24:03 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:03.733 18:24:03 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:03.733 18:24:03 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.302 18:24:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:10.302 18:24:09 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.302 18:24:09 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.302 18:24:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:10.302 ************************************ 00:06:10.302 START TEST guess_driver 00:06:10.302 ************************************ 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:10.302 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:10.302 Looking for driver=uio_pci_generic 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
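The guess_driver trace above tries vfio first (IOMMU groups present, or unsafe no-IOMMU mode enabled) and, when that fails, falls back to uio_pci_generic by checking that modprobe --show-depends resolves to a .ko. A rough sketch of that decision, using only the checks visible in the trace; treat it as an approximation of setup/driver.sh, not a copy of it.

  pick_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=''
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      # vfio-pci is preferred when IOMMU groups exist or unsafe no-IOMMU mode is on
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci
      # otherwise accept uio_pci_generic if modprobe can resolve it to a kernel module
      elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
          echo uio_pci_generic
      else
          echo 'No valid driver found'
      fi
  }

In this run there are zero IOMMU groups, so the fallback path wins and the test ends up looking for driver=uio_pci_generic, as the log shows.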
00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.302 18:24:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:10.561 18:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:10.561 18:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:10.561 18:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:11.128 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:11.387 18:24:11 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.953 00:06:17.953 real 0m7.704s 00:06:17.953 user 0m0.921s 00:06:17.953 sys 0m1.961s 00:06:17.953 18:24:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.953 18:24:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:17.953 ************************************ 00:06:17.953 END TEST guess_driver 00:06:17.953 ************************************ 00:06:17.953 00:06:17.953 real 0m13.980s 00:06:17.953 user 0m1.329s 00:06:17.953 sys 0m3.025s 00:06:17.953 18:24:17 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.953 18:24:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:17.953 ************************************ 00:06:17.953 END TEST driver 00:06:17.953 ************************************ 00:06:17.953 18:24:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:17.953 18:24:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.953 18:24:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.953 18:24:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:17.953 ************************************ 00:06:17.953 START TEST devices 00:06:17.953 
************************************ 00:06:17.953 18:24:17 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:17.953 * Looking for test storage... 00:06:17.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:17.953 18:24:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:17.953 18:24:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:17.953 18:24:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:17.953 18:24:17 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:19.333 18:24:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:19.333 18:24:18 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:06:19.334 18:24:18 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:19.334 
18:24:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:19.334 18:24:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:19.334 No valid GPT data, bailing 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:19.334 No valid GPT data, bailing 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:19.334 No valid GPT data, bailing 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:19.334 
18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:19.334 No valid GPT data, bailing 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.334 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:19.334 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.334 18:24:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:19.335 No valid GPT data, bailing 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:19.335 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:19.335 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:19.335 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:19.335 18:24:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:19.335 
18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:19.335 18:24:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:19.594 No valid GPT data, bailing 00:06:19.594 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:19.594 18:24:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.594 18:24:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.594 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:19.594 18:24:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:19.594 18:24:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:19.594 18:24:19 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:19.594 18:24:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:19.594 18:24:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:19.594 18:24:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:19.594 18:24:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:19.594 18:24:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.594 18:24:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.594 18:24:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.594 ************************************ 00:06:19.594 START TEST nvme_mount 00:06:19.594 ************************************ 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:19.594 18:24:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:20.570 Creating new GPT entries in memory. 00:06:20.570 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:20.570 other utilities. 00:06:20.570 18:24:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:20.570 18:24:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:20.570 18:24:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:20.570 18:24:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:20.570 18:24:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:21.505 Creating new GPT entries in memory. 00:06:21.505 The operation has completed successfully. 00:06:21.505 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:21.505 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:21.505 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71700 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.763 18:24:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.021 18:24:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.279 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.538 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.538 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:22.798 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:22.798 18:24:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:06:23.058 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:23.058 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:23.058 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:23.058 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.058 18:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:23.629 18:24:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:23.629 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.888 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:23.888 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.888 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:23.888 18:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.148 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:24.148 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:24.409 18:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:24.669 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:24.669 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:24.669 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:24.669 18:24:24 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.669 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:24.669 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.929 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:24.929 18:24:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.189 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:25.189 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.189 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:25.189 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.448 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:25.448 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:25.709 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:25.709 ************************************ 00:06:25.709 END TEST nvme_mount 00:06:25.709 ************************************ 00:06:25.709 00:06:25.709 real 0m6.155s 00:06:25.709 user 0m1.547s 00:06:25.709 sys 0m2.277s 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.709 18:24:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:25.709 18:24:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:25.709 18:24:25 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.709 18:24:25 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.709 18:24:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:25.709 ************************************ 00:06:25.709 START TEST dm_mount 00:06:25.709 ************************************ 00:06:25.709 18:24:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:25.709 18:24:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:25.709 18:24:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:25.709 18:24:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:25.710 18:24:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:26.686 Creating new GPT entries in memory. 00:06:26.686 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:26.686 other utilities. 00:06:26.947 18:24:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:26.947 18:24:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.947 18:24:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:26.947 18:24:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:26.947 18:24:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:27.885 Creating new GPT entries in memory. 00:06:27.885 The operation has completed successfully. 00:06:27.885 18:24:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:27.885 18:24:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:27.885 18:24:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:27.885 18:24:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:27.885 18:24:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:28.823 The operation has completed successfully. 
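The two sgdisk calls just completed carve two partitions out of /dev/nvme0n1 for the dm test: the script takes the 1073741824-byte budget, divides it by 4096 to get the length it hands to sgdisk, and advances part_start past each new partition, which is where the 2048:264191 and 264192:526335 ranges come from. A condensed sketch of that partitioning step under the same assumptions, with the sync_dev_uevents.sh helper and flock usage taken from the trace.

  disk=/dev/nvme0n1
  parts=(nvme0n1p1 nvme0n1p2)
  size=$(( 1073741824 / 4096 ))   # per-partition length, exactly as computed in the trace

  sgdisk "$disk" --zap-all
  # watch for the uevents of the partitions we are about to create
  /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition "${parts[@]}" &
  uevent_pid=$!

  part_start=2048
  for part in 1 2; do
      part_end=$(( part_start + size - 1 ))
      flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
      part_start=$(( part_end + 1 ))
  done
  wait "$uevent_pid"   # the "wait 72336" seen in the log is this step

Waiting on the uevent watcher instead of sleeping is what lets the test proceed as soon as udev has actually created /dev/nvme0n1p1 and /dev/nvme0n1p2.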
00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72336 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:28.823 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:29.084 18:24:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.344 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.604 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.604 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.604 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.604 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.863 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.863 18:24:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:30.122 18:24:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.381 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.641 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:30.900 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:30.900 18:24:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:31.160 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:31.160 00:06:31.160 real 0m5.466s 00:06:31.160 user 0m0.977s 00:06:31.160 sys 0m1.384s 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.160 18:24:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 ************************************ 00:06:31.160 END TEST dm_mount 00:06:31.160 ************************************ 00:06:31.160 18:24:31 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:31.160 18:24:31 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:31.160 18:24:31 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:31.419 18:24:31 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:31.419 18:24:31 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:31.419 18:24:31 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:31.419 18:24:31 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:31.678 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:31.678 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:31.678 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:31.678 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:31.678 18:24:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:31.678 ************************************ 00:06:31.678 END TEST devices 00:06:31.678 ************************************ 00:06:31.678 00:06:31.678 real 0m13.971s 00:06:31.678 user 0m3.460s 00:06:31.678 sys 0m4.796s 00:06:31.678 18:24:31 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.678 18:24:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:31.678 00:06:31.678 real 0m49.650s 00:06:31.678 user 0m11.302s 00:06:31.678 sys 0m18.483s 00:06:31.678 18:24:31 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.678 18:24:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:31.678 ************************************ 00:06:31.678 END TEST setup.sh 00:06:31.678 ************************************ 00:06:31.678 18:24:31 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:32.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.814 Hugepages 00:06:32.815 node hugesize free / total 00:06:32.815 node0 1048576kB 0 / 0 00:06:32.815 node0 2048kB 2048 / 2048 00:06:32.815 
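Note on the cleanup output above: the signatures wipefs reports are the standard on-disk magics — 53 ef at offset 0x438 is the ext2/3/4 superblock magic, the eight bytes 45 46 49 20 50 41 52 54 spell "EFI PART" (the GPT header signature, primary and backup copies), and 55 aa at offset 0x1fe is the protective-MBR boot signature. The per-node hugepage counters printed by "setup.sh status" above come straight from sysfs; a minimal standalone equivalent, assuming the usual /sys/devices/system/node layout (this loop is illustrative only and not part of the test scripts):

for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*kB; do
    size=${hp##*hugepages-}    # e.g. 2048kB or 1048576kB
    echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
  done
done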
00:06:32.815 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:32.815 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:33.073 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:33.073 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:33.073 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:33.331 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:33.331 18:24:33 -- spdk/autotest.sh@130 -- # uname -s 00:06:33.331 18:24:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:33.331 18:24:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:33.331 18:24:33 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:33.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.466 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.466 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.466 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.466 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.726 18:24:34 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:35.679 18:24:35 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:35.679 18:24:35 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:35.679 18:24:35 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:35.679 18:24:35 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:35.679 18:24:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:35.679 18:24:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:35.679 18:24:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:35.679 18:24:35 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.679 18:24:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:35.679 18:24:35 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:35.679 18:24:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:35.679 18:24:35 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:36.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.524 Waiting for block devices as requested 00:06:36.524 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.524 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.784 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.784 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.133 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:42.133 18:24:41 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:42.133 18:24:41 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:42.133 18:24:41 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:42.133 18:24:41 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:42.133 18:24:41 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:42.133 18:24:41 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:42.133 18:24:41 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:42.133 18:24:41 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:42.133 18:24:41 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:42.133 18:24:41 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:42.133 18:24:41 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:42.133 18:24:41 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:42.133 18:24:41 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:42.134 18:24:41 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1553 -- # continue 00:06:42.134 18:24:41 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:42.134 18:24:41 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1553 -- # continue 00:06:42.134 18:24:41 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:42.134 18:24:41 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1553 -- # continue 00:06:42.134 18:24:41 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:42.134 18:24:41 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:42.134 18:24:41 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:42.134 18:24:41 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:42.134 18:24:41 -- common/autotest_common.sh@1553 -- # continue 00:06:42.134 18:24:41 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:42.134 18:24:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.134 18:24:41 -- common/autotest_common.sh@10 -- # set +x 00:06:42.134 18:24:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:42.134 18:24:42 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:42.134 18:24:42 -- common/autotest_common.sh@10 -- # set +x 00:06:42.134 18:24:42 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:42.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.272 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.272 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.530 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.530 18:24:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:43.530 18:24:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.530 18:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:43.530 18:24:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:43.530 18:24:43 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:43.530 18:24:43 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:43.530 18:24:43 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:43.530 18:24:43 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:43.530 18:24:43 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:43.530 18:24:43 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:43.530 18:24:43 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:43.530 18:24:43 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:43.530 18:24:43 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.530 18:24:43 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:43.789 18:24:43 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:43.789 18:24:43 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:43.789 18:24:43 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:43.789 18:24:43 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:43.789 18:24:43 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:43.789 18:24:43 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.789 18:24:43 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:43.789 18:24:43 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:43.789 18:24:43 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:43.789 18:24:43 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.790 18:24:43 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:43.790 18:24:43 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:43.790 18:24:43 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:43.790 18:24:43 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.790 18:24:43 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:43.790 18:24:43 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:43.790 18:24:43 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:43.790 
18:24:43 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.790 18:24:43 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:43.790 18:24:43 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:43.790 18:24:43 -- common/autotest_common.sh@1589 -- # return 0 00:06:43.790 18:24:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:43.790 18:24:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:43.790 18:24:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:43.790 18:24:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:43.790 18:24:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:43.790 18:24:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:43.790 18:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:43.790 18:24:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:43.790 18:24:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:43.790 18:24:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.790 18:24:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.790 18:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:43.790 ************************************ 00:06:43.790 START TEST env 00:06:43.790 ************************************ 00:06:43.790 18:24:43 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:43.790 * Looking for test storage... 00:06:43.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:43.790 18:24:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:43.790 18:24:43 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.790 18:24:43 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.790 18:24:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.790 ************************************ 00:06:43.790 START TEST env_memory 00:06:43.790 ************************************ 00:06:43.790 18:24:43 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:44.050 00:06:44.050 00:06:44.050 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.050 http://cunit.sourceforge.net/ 00:06:44.050 00:06:44.050 00:06:44.050 Suite: memory 00:06:44.050 Test: alloc and free memory map ...[2024-07-23 18:24:43.898933] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:44.050 passed 00:06:44.050 Test: mem map translation ...[2024-07-23 18:24:43.937664] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:44.050 [2024-07-23 18:24:43.937705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:44.050 [2024-07-23 18:24:43.937790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:44.050 [2024-07-23 18:24:43.937809] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:44.050 passed 00:06:44.050 Test: mem map registration ...[2024-07-23 18:24:43.999353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:44.050 [2024-07-23 18:24:43.999432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:44.050 passed 00:06:44.050 Test: mem map adjacent registrations ...passed 00:06:44.050 00:06:44.050 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.050 suites 1 1 n/a 0 0 00:06:44.050 tests 4 4 4 0 0 00:06:44.050 asserts 152 152 152 0 n/a 00:06:44.050 00:06:44.050 Elapsed time = 0.225 seconds 00:06:44.309 00:06:44.309 real 0m0.271s 00:06:44.309 user 0m0.237s 00:06:44.309 sys 0m0.029s 00:06:44.309 18:24:44 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.309 18:24:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:44.309 ************************************ 00:06:44.309 END TEST env_memory 00:06:44.309 ************************************ 00:06:44.309 18:24:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:44.309 18:24:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.309 18:24:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.309 18:24:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.309 ************************************ 00:06:44.309 START TEST env_vtophys 00:06:44.309 ************************************ 00:06:44.309 18:24:44 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:44.309 EAL: lib.eal log level changed from notice to debug 00:06:44.309 EAL: Detected lcore 0 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 1 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 2 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 3 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 4 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 5 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 6 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 7 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 8 as core 0 on socket 0 00:06:44.309 EAL: Detected lcore 9 as core 0 on socket 0 00:06:44.309 EAL: Maximum logical cores by configuration: 128 00:06:44.309 EAL: Detected CPU lcores: 10 00:06:44.309 EAL: Detected NUMA nodes: 1 00:06:44.309 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:44.309 EAL: Detected shared linkage of DPDK 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:44.309 EAL: Registered [vdev] bus. 
00:06:44.309 EAL: bus.vdev log level changed from disabled to notice 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:44.309 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:44.309 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:44.309 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:44.310 EAL: No shared files mode enabled, IPC will be disabled 00:06:44.310 EAL: No shared files mode enabled, IPC is disabled 00:06:44.310 EAL: Selected IOVA mode 'PA' 00:06:44.310 EAL: Probing VFIO support... 00:06:44.310 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.310 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:44.310 EAL: Ask a virtual area of 0x2e000 bytes 00:06:44.310 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:44.310 EAL: Setting up physically contiguous memory... 00:06:44.310 EAL: Setting maximum number of open files to 524288 00:06:44.310 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:44.310 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:44.310 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.310 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:44.310 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.310 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.310 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:44.310 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:44.310 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.310 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:44.310 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.310 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.310 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:44.310 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:44.310 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.310 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:44.310 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.310 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.310 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:44.310 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:44.310 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.310 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:44.310 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.310 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.310 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:44.310 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:44.310 EAL: Hugepages will be freed exactly as allocated. 
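The virtual-area reservations above follow directly from the memseg parameters EAL prints: each of the 4 segment lists holds n_segs:8192 segments of hugepage_sz:2097152 bytes, i.e. 8192 * 2 MiB = 0x400000000 bytes (16 GiB) of reserved VA per list, laid out starting at the base-virtaddr of 0x200000000000 that these tests pass to EAL. A quick sanity check of that arithmetic (plain shell arithmetic, not part of the test):

printf '0x%x\n' $((8192 * 2097152))   # -> 0x400000000, the size of each memseg list reserved above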
00:06:44.310 EAL: No shared files mode enabled, IPC is disabled 00:06:44.310 EAL: No shared files mode enabled, IPC is disabled 00:06:44.310 EAL: TSC frequency is ~2290000 KHz 00:06:44.310 EAL: Main lcore 0 is ready (tid=7ff1f7dc3a40;cpuset=[0]) 00:06:44.310 EAL: Trying to obtain current memory policy. 00:06:44.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.310 EAL: Restoring previous memory policy: 0 00:06:44.310 EAL: request: mp_malloc_sync 00:06:44.310 EAL: No shared files mode enabled, IPC is disabled 00:06:44.310 EAL: Heap on socket 0 was expanded by 2MB 00:06:44.310 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.310 EAL: No shared files mode enabled, IPC is disabled 00:06:44.310 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:44.310 EAL: Mem event callback 'spdk:(nil)' registered 00:06:44.310 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:44.310 00:06:44.310 00:06:44.310 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.310 http://cunit.sourceforge.net/ 00:06:44.310 00:06:44.310 00:06:44.310 Suite: components_suite 00:06:44.879 Test: vtophys_malloc_test ...passed 00:06:44.879 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.879 EAL: Trying to obtain current memory policy. 
00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 34MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 66MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 66MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 130MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was shrunk by 130MB 00:06:44.879 EAL: Trying to obtain current memory policy. 00:06:44.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.879 EAL: Restoring previous memory policy: 4 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.879 EAL: request: mp_malloc_sync 00:06:44.879 EAL: No shared files mode enabled, IPC is disabled 00:06:44.879 EAL: Heap on socket 0 was expanded by 258MB 00:06:44.879 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.139 EAL: request: mp_malloc_sync 00:06:45.139 EAL: No shared files mode enabled, IPC is disabled 00:06:45.139 EAL: Heap on socket 0 was shrunk by 258MB 00:06:45.139 EAL: Trying to obtain current memory policy. 
00:06:45.139 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.139 EAL: Restoring previous memory policy: 4 00:06:45.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.139 EAL: request: mp_malloc_sync 00:06:45.139 EAL: No shared files mode enabled, IPC is disabled 00:06:45.139 EAL: Heap on socket 0 was expanded by 514MB 00:06:45.139 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.401 EAL: request: mp_malloc_sync 00:06:45.401 EAL: No shared files mode enabled, IPC is disabled 00:06:45.401 EAL: Heap on socket 0 was shrunk by 514MB 00:06:45.401 EAL: Trying to obtain current memory policy. 00:06:45.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.401 EAL: Restoring previous memory policy: 4 00:06:45.401 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.401 EAL: request: mp_malloc_sync 00:06:45.401 EAL: No shared files mode enabled, IPC is disabled 00:06:45.401 EAL: Heap on socket 0 was expanded by 1026MB 00:06:45.660 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.919 passed 00:06:45.919 00:06:45.919 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.919 suites 1 1 n/a 0 0 00:06:45.919 tests 2 2 2 0 0 00:06:45.919 asserts 5365 5365 5365 0 n/a 00:06:45.919 00:06:45.919 Elapsed time = 1.373 seconds 00:06:45.919 EAL: request: mp_malloc_sync 00:06:45.919 EAL: No shared files mode enabled, IPC is disabled 00:06:45.919 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.919 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.919 EAL: request: mp_malloc_sync 00:06:45.919 EAL: No shared files mode enabled, IPC is disabled 00:06:45.919 EAL: Heap on socket 0 was shrunk by 2MB 00:06:45.919 EAL: No shared files mode enabled, IPC is disabled 00:06:45.919 EAL: No shared files mode enabled, IPC is disabled 00:06:45.919 EAL: No shared files mode enabled, IPC is disabled 00:06:45.919 00:06:45.919 real 0m1.623s 00:06:45.919 user 0m0.775s 00:06:45.919 sys 0m0.713s 00:06:45.919 18:24:45 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.919 18:24:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:45.920 ************************************ 00:06:45.920 END TEST env_vtophys 00:06:45.920 ************************************ 00:06:45.920 18:24:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.920 18:24:45 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.920 18:24:45 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.920 18:24:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.920 ************************************ 00:06:45.920 START TEST env_pci 00:06:45.920 ************************************ 00:06:45.920 18:24:45 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.920 00:06:45.920 00:06:45.920 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.920 http://cunit.sourceforge.net/ 00:06:45.920 00:06:45.920 00:06:45.920 Suite: pci 00:06:45.920 Test: pci_hook ...[2024-07-23 18:24:45.874869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74114 has claimed it 00:06:45.920 passed 00:06:45.920 00:06:45.920 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.920 suites 1 1 n/a 0 0 00:06:45.920 tests 1 1 1 0 0 00:06:45.920 asserts 25 25 25 0 n/a 00:06:45.920 00:06:45.920 Elapsed time = 0.007 seconds 00:06:45.920 EAL: Cannot find 
device (10000:00:01.0) 00:06:45.920 EAL: Failed to attach device on primary process 00:06:45.920 00:06:45.920 real 0m0.089s 00:06:45.920 user 0m0.045s 00:06:45.920 sys 0m0.044s 00:06:45.920 18:24:45 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.920 ************************************ 00:06:45.920 END TEST env_pci 00:06:45.920 18:24:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:45.920 ************************************ 00:06:46.179 18:24:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:46.179 18:24:45 env -- env/env.sh@15 -- # uname 00:06:46.179 18:24:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:46.179 18:24:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:46.179 18:24:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:46.179 18:24:45 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:46.179 18:24:45 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.179 18:24:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.179 ************************************ 00:06:46.179 START TEST env_dpdk_post_init 00:06:46.179 ************************************ 00:06:46.179 18:24:45 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:46.180 EAL: Detected CPU lcores: 10 00:06:46.180 EAL: Detected NUMA nodes: 1 00:06:46.180 EAL: Detected shared linkage of DPDK 00:06:46.180 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.180 EAL: Selected IOVA mode 'PA' 00:06:46.180 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:46.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:46.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:46.180 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:46.180 Starting DPDK initialization... 00:06:46.180 Starting SPDK post initialization... 00:06:46.180 SPDK NVMe probe 00:06:46.180 Attaching to 0000:00:10.0 00:06:46.180 Attaching to 0000:00:11.0 00:06:46.180 Attaching to 0000:00:12.0 00:06:46.180 Attaching to 0000:00:13.0 00:06:46.180 Attached to 0000:00:10.0 00:06:46.180 Attached to 0000:00:11.0 00:06:46.180 Attached to 0000:00:13.0 00:06:46.180 Attached to 0000:00:12.0 00:06:46.180 Cleaning up... 
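The probe lines above show the four emulated NVMe controllers (PCI vendor 1b36, device 0010) being attached by SPDK's userspace NVMe driver (spdk_nvme) and then detached again at "Cleaning up...". Which kernel-side driver a given controller is bound to at any point (nvme vs. uio_pci_generic, as toggled by the earlier setup.sh runs) can be read straight from sysfs; a small illustrative check, where the BDF is just an example address from this run:

bdf=0000:00:10.0
echo "vendor=$(cat /sys/bus/pci/devices/$bdf/vendor) device=$(cat /sys/bus/pci/devices/$bdf/device)"
readlink /sys/bus/pci/devices/$bdf/driver   # e.g. .../drivers/nvme or .../drivers/uio_pci_generic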
00:06:46.439 ************************************ 00:06:46.439 END TEST env_dpdk_post_init 00:06:46.439 ************************************ 00:06:46.439 00:06:46.439 real 0m0.240s 00:06:46.439 user 0m0.062s 00:06:46.439 sys 0m0.082s 00:06:46.439 18:24:46 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.439 18:24:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.439 18:24:46 env -- env/env.sh@26 -- # uname 00:06:46.439 18:24:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:46.439 18:24:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.439 18:24:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.439 18:24:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.439 18:24:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.439 ************************************ 00:06:46.439 START TEST env_mem_callbacks 00:06:46.439 ************************************ 00:06:46.439 18:24:46 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.439 EAL: Detected CPU lcores: 10 00:06:46.439 EAL: Detected NUMA nodes: 1 00:06:46.439 EAL: Detected shared linkage of DPDK 00:06:46.439 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.439 EAL: Selected IOVA mode 'PA' 00:06:46.439 00:06:46.439 00:06:46.439 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.439 http://cunit.sourceforge.net/ 00:06:46.439 00:06:46.439 00:06:46.439 Suite: memory 00:06:46.439 Test: test ... 00:06:46.439 register 0x200000200000 2097152 00:06:46.439 malloc 3145728 00:06:46.439 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.439 register 0x200000400000 4194304 00:06:46.439 buf 0x200000500000 len 3145728 PASSED 00:06:46.439 malloc 64 00:06:46.439 buf 0x2000004fff40 len 64 PASSED 00:06:46.439 malloc 4194304 00:06:46.439 register 0x200000800000 6291456 00:06:46.439 buf 0x200000a00000 len 4194304 PASSED 00:06:46.439 free 0x200000500000 3145728 00:06:46.439 free 0x2000004fff40 64 00:06:46.439 unregister 0x200000400000 4194304 PASSED 00:06:46.439 free 0x200000a00000 4194304 00:06:46.439 unregister 0x200000800000 6291456 PASSED 00:06:46.439 malloc 8388608 00:06:46.439 register 0x200000400000 10485760 00:06:46.439 buf 0x200000600000 len 8388608 PASSED 00:06:46.439 free 0x200000600000 8388608 00:06:46.439 unregister 0x200000400000 10485760 PASSED 00:06:46.439 passed 00:06:46.439 00:06:46.439 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.439 suites 1 1 n/a 0 0 00:06:46.439 tests 1 1 1 0 0 00:06:46.439 asserts 15 15 15 0 n/a 00:06:46.439 00:06:46.439 Elapsed time = 0.010 seconds 00:06:46.439 00:06:46.439 real 0m0.181s 00:06:46.439 user 0m0.026s 00:06:46.439 sys 0m0.051s 00:06:46.439 18:24:46 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.439 18:24:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:46.439 ************************************ 00:06:46.439 END TEST env_mem_callbacks 00:06:46.439 ************************************ 00:06:46.699 00:06:46.699 real 0m2.818s 00:06:46.699 user 0m1.282s 00:06:46.699 sys 0m1.208s 00:06:46.699 18:24:46 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.699 18:24:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 ************************************ 00:06:46.699 END TEST env 00:06:46.699 
************************************ 00:06:46.699 18:24:46 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.699 18:24:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.700 18:24:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.700 18:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:46.700 ************************************ 00:06:46.700 START TEST rpc 00:06:46.700 ************************************ 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.700 * Looking for test storage... 00:06:46.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.700 18:24:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74233 00:06:46.700 18:24:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.700 18:24:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74233 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@827 -- # '[' -z 74233 ']' 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.700 18:24:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.700 18:24:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.959 [2024-07-23 18:24:46.757775] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:46.959 [2024-07-23 18:24:46.757906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74233 ] 00:06:46.959 [2024-07-23 18:24:46.900078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.959 [2024-07-23 18:24:46.953010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:46.959 [2024-07-23 18:24:46.953064] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74233' to capture a snapshot of events at runtime. 00:06:46.959 [2024-07-23 18:24:46.953073] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.959 [2024-07-23 18:24:46.953083] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.959 [2024-07-23 18:24:46.953093] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74233 for offline analysis/debug. 
00:06:46.959 [2024-07-23 18:24:46.953126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.531 18:24:47 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.531 18:24:47 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.531 18:24:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:47.531 18:24:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:47.531 18:24:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:47.531 18:24:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:47.531 18:24:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.531 18:24:47 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.531 18:24:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.531 ************************************ 00:06:47.531 START TEST rpc_integrity 00:06:47.531 ************************************ 00:06:47.531 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:47.531 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.531 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.531 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.531 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.531 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.531 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.795 { 00:06:47.795 "name": "Malloc0", 00:06:47.795 "aliases": [ 00:06:47.795 "461c9080-43ee-47c5-b012-ada35c9677d2" 00:06:47.795 ], 00:06:47.795 "product_name": "Malloc disk", 00:06:47.795 "block_size": 512, 00:06:47.795 "num_blocks": 16384, 00:06:47.795 "uuid": "461c9080-43ee-47c5-b012-ada35c9677d2", 00:06:47.795 "assigned_rate_limits": { 00:06:47.795 "rw_ios_per_sec": 0, 00:06:47.795 "rw_mbytes_per_sec": 0, 00:06:47.795 "r_mbytes_per_sec": 0, 00:06:47.795 "w_mbytes_per_sec": 0 00:06:47.795 }, 00:06:47.795 "claimed": false, 00:06:47.795 "zoned": false, 00:06:47.795 "supported_io_types": { 00:06:47.795 "read": true, 00:06:47.795 "write": true, 00:06:47.795 "unmap": true, 00:06:47.795 "write_zeroes": 
true, 00:06:47.795 "flush": true, 00:06:47.795 "reset": true, 00:06:47.795 "compare": false, 00:06:47.795 "compare_and_write": false, 00:06:47.795 "abort": true, 00:06:47.795 "nvme_admin": false, 00:06:47.795 "nvme_io": false 00:06:47.795 }, 00:06:47.795 "memory_domains": [ 00:06:47.795 { 00:06:47.795 "dma_device_id": "system", 00:06:47.795 "dma_device_type": 1 00:06:47.795 }, 00:06:47.795 { 00:06:47.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.795 "dma_device_type": 2 00:06:47.795 } 00:06:47.795 ], 00:06:47.795 "driver_specific": {} 00:06:47.795 } 00:06:47.795 ]' 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 [2024-07-23 18:24:47.695932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:47.795 [2024-07-23 18:24:47.696003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.795 [2024-07-23 18:24:47.696037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:47.795 [2024-07-23 18:24:47.696054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.795 [2024-07-23 18:24:47.698649] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.795 [2024-07-23 18:24:47.698690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.795 Passthru0 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.795 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.795 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.795 { 00:06:47.795 "name": "Malloc0", 00:06:47.795 "aliases": [ 00:06:47.795 "461c9080-43ee-47c5-b012-ada35c9677d2" 00:06:47.795 ], 00:06:47.795 "product_name": "Malloc disk", 00:06:47.795 "block_size": 512, 00:06:47.795 "num_blocks": 16384, 00:06:47.795 "uuid": "461c9080-43ee-47c5-b012-ada35c9677d2", 00:06:47.795 "assigned_rate_limits": { 00:06:47.795 "rw_ios_per_sec": 0, 00:06:47.795 "rw_mbytes_per_sec": 0, 00:06:47.795 "r_mbytes_per_sec": 0, 00:06:47.795 "w_mbytes_per_sec": 0 00:06:47.795 }, 00:06:47.795 "claimed": true, 00:06:47.795 "claim_type": "exclusive_write", 00:06:47.795 "zoned": false, 00:06:47.795 "supported_io_types": { 00:06:47.795 "read": true, 00:06:47.795 "write": true, 00:06:47.795 "unmap": true, 00:06:47.796 "write_zeroes": true, 00:06:47.796 "flush": true, 00:06:47.796 "reset": true, 00:06:47.796 "compare": false, 00:06:47.796 "compare_and_write": false, 00:06:47.796 "abort": true, 00:06:47.796 "nvme_admin": false, 00:06:47.796 "nvme_io": false 00:06:47.796 }, 00:06:47.796 "memory_domains": [ 00:06:47.796 { 00:06:47.796 "dma_device_id": "system", 00:06:47.796 "dma_device_type": 1 00:06:47.796 }, 00:06:47.796 { 00:06:47.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.796 "dma_device_type": 2 00:06:47.796 } 
00:06:47.796 ], 00:06:47.796 "driver_specific": {} 00:06:47.796 }, 00:06:47.796 { 00:06:47.796 "name": "Passthru0", 00:06:47.796 "aliases": [ 00:06:47.796 "c85ffd6c-0ffe-59cd-bd8e-96f893554380" 00:06:47.796 ], 00:06:47.796 "product_name": "passthru", 00:06:47.796 "block_size": 512, 00:06:47.796 "num_blocks": 16384, 00:06:47.796 "uuid": "c85ffd6c-0ffe-59cd-bd8e-96f893554380", 00:06:47.796 "assigned_rate_limits": { 00:06:47.796 "rw_ios_per_sec": 0, 00:06:47.796 "rw_mbytes_per_sec": 0, 00:06:47.796 "r_mbytes_per_sec": 0, 00:06:47.796 "w_mbytes_per_sec": 0 00:06:47.796 }, 00:06:47.796 "claimed": false, 00:06:47.796 "zoned": false, 00:06:47.796 "supported_io_types": { 00:06:47.796 "read": true, 00:06:47.796 "write": true, 00:06:47.796 "unmap": true, 00:06:47.796 "write_zeroes": true, 00:06:47.796 "flush": true, 00:06:47.796 "reset": true, 00:06:47.796 "compare": false, 00:06:47.796 "compare_and_write": false, 00:06:47.796 "abort": true, 00:06:47.796 "nvme_admin": false, 00:06:47.796 "nvme_io": false 00:06:47.796 }, 00:06:47.796 "memory_domains": [ 00:06:47.796 { 00:06:47.796 "dma_device_id": "system", 00:06:47.796 "dma_device_type": 1 00:06:47.796 }, 00:06:47.796 { 00:06:47.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.796 "dma_device_type": 2 00:06:47.796 } 00:06:47.796 ], 00:06:47.796 "driver_specific": { 00:06:47.796 "passthru": { 00:06:47.796 "name": "Passthru0", 00:06:47.796 "base_bdev_name": "Malloc0" 00:06:47.796 } 00:06:47.796 } 00:06:47.796 } 00:06:47.796 ]' 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.796 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.796 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:48.055 18:24:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.055 00:06:48.055 real 0m0.295s 00:06:48.055 user 0m0.175s 00:06:48.055 sys 0m0.047s 00:06:48.055 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.055 18:24:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 ************************************ 00:06:48.055 END TEST rpc_integrity 00:06:48.055 ************************************ 00:06:48.055 18:24:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:48.055 18:24:47 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.055 18:24:47 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.055 18:24:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 ************************************ 00:06:48.055 START TEST rpc_plugins 00:06:48.055 ************************************ 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:48.055 18:24:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.055 18:24:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:48.055 18:24:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.055 18:24:47 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.055 18:24:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:48.055 { 00:06:48.055 "name": "Malloc1", 00:06:48.056 "aliases": [ 00:06:48.056 "e9402881-a572-4606-954b-2a5f8911ab10" 00:06:48.056 ], 00:06:48.056 "product_name": "Malloc disk", 00:06:48.056 "block_size": 4096, 00:06:48.056 "num_blocks": 256, 00:06:48.056 "uuid": "e9402881-a572-4606-954b-2a5f8911ab10", 00:06:48.056 "assigned_rate_limits": { 00:06:48.056 "rw_ios_per_sec": 0, 00:06:48.056 "rw_mbytes_per_sec": 0, 00:06:48.056 "r_mbytes_per_sec": 0, 00:06:48.056 "w_mbytes_per_sec": 0 00:06:48.056 }, 00:06:48.056 "claimed": false, 00:06:48.056 "zoned": false, 00:06:48.056 "supported_io_types": { 00:06:48.056 "read": true, 00:06:48.056 "write": true, 00:06:48.056 "unmap": true, 00:06:48.056 "write_zeroes": true, 00:06:48.056 "flush": true, 00:06:48.056 "reset": true, 00:06:48.056 "compare": false, 00:06:48.056 "compare_and_write": false, 00:06:48.056 "abort": true, 00:06:48.056 "nvme_admin": false, 00:06:48.056 "nvme_io": false 00:06:48.056 }, 00:06:48.056 "memory_domains": [ 00:06:48.056 { 00:06:48.056 "dma_device_id": "system", 00:06:48.056 "dma_device_type": 1 00:06:48.056 }, 00:06:48.056 { 00:06:48.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.056 "dma_device_type": 2 00:06:48.056 } 00:06:48.056 ], 00:06:48.056 "driver_specific": {} 00:06:48.056 } 00:06:48.056 ]' 00:06:48.056 18:24:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:48.056 18:24:48 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:06:48.056 00:06:48.056 real 0m0.164s 00:06:48.056 user 0m0.093s 00:06:48.056 sys 0m0.028s 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.056 18:24:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.056 ************************************ 00:06:48.056 END TEST rpc_plugins 00:06:48.056 ************************************ 00:06:48.316 18:24:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:48.316 18:24:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.316 18:24:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.316 18:24:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 ************************************ 00:06:48.316 START TEST rpc_trace_cmd_test 00:06:48.316 ************************************ 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:48.316 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74233", 00:06:48.316 "tpoint_group_mask": "0x8", 00:06:48.316 "iscsi_conn": { 00:06:48.316 "mask": "0x2", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "scsi": { 00:06:48.316 "mask": "0x4", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "bdev": { 00:06:48.316 "mask": "0x8", 00:06:48.316 "tpoint_mask": "0xffffffffffffffff" 00:06:48.316 }, 00:06:48.316 "nvmf_rdma": { 00:06:48.316 "mask": "0x10", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "nvmf_tcp": { 00:06:48.316 "mask": "0x20", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "ftl": { 00:06:48.316 "mask": "0x40", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "blobfs": { 00:06:48.316 "mask": "0x80", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "dsa": { 00:06:48.316 "mask": "0x200", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "thread": { 00:06:48.316 "mask": "0x400", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "nvme_pcie": { 00:06:48.316 "mask": "0x800", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "iaa": { 00:06:48.316 "mask": "0x1000", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "nvme_tcp": { 00:06:48.316 "mask": "0x2000", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "bdev_nvme": { 00:06:48.316 "mask": "0x4000", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 }, 00:06:48.316 "sock": { 00:06:48.316 "mask": "0x8000", 00:06:48.316 "tpoint_mask": "0x0" 00:06:48.316 } 00:06:48.316 }' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:48.316 00:06:48.316 real 0m0.225s 00:06:48.316 user 0m0.179s 00:06:48.316 sys 0m0.039s 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.316 18:24:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 ************************************ 00:06:48.316 END TEST rpc_trace_cmd_test 00:06:48.316 ************************************ 00:06:48.575 18:24:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:48.575 18:24:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:48.575 18:24:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:48.575 18:24:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.575 18:24:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.575 18:24:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.575 ************************************ 00:06:48.575 START TEST rpc_daemon_integrity 00:06:48.575 ************************************ 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.575 { 00:06:48.575 "name": "Malloc2", 00:06:48.575 "aliases": [ 00:06:48.575 "3f3f05cf-1ac9-4a21-bd0c-4b6ec66baa73" 00:06:48.575 ], 00:06:48.575 "product_name": "Malloc disk", 00:06:48.575 "block_size": 512, 00:06:48.575 "num_blocks": 16384, 00:06:48.575 "uuid": "3f3f05cf-1ac9-4a21-bd0c-4b6ec66baa73", 00:06:48.575 "assigned_rate_limits": { 00:06:48.575 "rw_ios_per_sec": 0, 00:06:48.575 
"rw_mbytes_per_sec": 0, 00:06:48.575 "r_mbytes_per_sec": 0, 00:06:48.575 "w_mbytes_per_sec": 0 00:06:48.575 }, 00:06:48.575 "claimed": false, 00:06:48.575 "zoned": false, 00:06:48.575 "supported_io_types": { 00:06:48.575 "read": true, 00:06:48.575 "write": true, 00:06:48.575 "unmap": true, 00:06:48.575 "write_zeroes": true, 00:06:48.575 "flush": true, 00:06:48.575 "reset": true, 00:06:48.575 "compare": false, 00:06:48.575 "compare_and_write": false, 00:06:48.575 "abort": true, 00:06:48.575 "nvme_admin": false, 00:06:48.575 "nvme_io": false 00:06:48.575 }, 00:06:48.575 "memory_domains": [ 00:06:48.575 { 00:06:48.575 "dma_device_id": "system", 00:06:48.575 "dma_device_type": 1 00:06:48.575 }, 00:06:48.575 { 00:06:48.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.575 "dma_device_type": 2 00:06:48.575 } 00:06:48.575 ], 00:06:48.575 "driver_specific": {} 00:06:48.575 } 00:06:48.575 ]' 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.575 [2024-07-23 18:24:48.566486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:48.575 [2024-07-23 18:24:48.566550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.575 [2024-07-23 18:24:48.566584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:48.575 [2024-07-23 18:24:48.566597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.575 [2024-07-23 18:24:48.568987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.575 [2024-07-23 18:24:48.569028] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.575 Passthru0 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.575 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.576 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.576 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.576 { 00:06:48.576 "name": "Malloc2", 00:06:48.576 "aliases": [ 00:06:48.576 "3f3f05cf-1ac9-4a21-bd0c-4b6ec66baa73" 00:06:48.576 ], 00:06:48.576 "product_name": "Malloc disk", 00:06:48.576 "block_size": 512, 00:06:48.576 "num_blocks": 16384, 00:06:48.576 "uuid": "3f3f05cf-1ac9-4a21-bd0c-4b6ec66baa73", 00:06:48.576 "assigned_rate_limits": { 00:06:48.576 "rw_ios_per_sec": 0, 00:06:48.576 "rw_mbytes_per_sec": 0, 00:06:48.576 "r_mbytes_per_sec": 0, 00:06:48.576 "w_mbytes_per_sec": 0 00:06:48.576 }, 00:06:48.576 "claimed": true, 00:06:48.576 "claim_type": "exclusive_write", 00:06:48.576 "zoned": false, 00:06:48.576 "supported_io_types": { 00:06:48.576 "read": true, 00:06:48.576 "write": true, 00:06:48.576 "unmap": true, 00:06:48.576 "write_zeroes": true, 00:06:48.576 "flush": true, 00:06:48.576 "reset": true, 00:06:48.576 "compare": false, 
00:06:48.576 "compare_and_write": false, 00:06:48.576 "abort": true, 00:06:48.576 "nvme_admin": false, 00:06:48.576 "nvme_io": false 00:06:48.576 }, 00:06:48.576 "memory_domains": [ 00:06:48.576 { 00:06:48.576 "dma_device_id": "system", 00:06:48.576 "dma_device_type": 1 00:06:48.576 }, 00:06:48.576 { 00:06:48.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.576 "dma_device_type": 2 00:06:48.576 } 00:06:48.576 ], 00:06:48.576 "driver_specific": {} 00:06:48.576 }, 00:06:48.576 { 00:06:48.576 "name": "Passthru0", 00:06:48.576 "aliases": [ 00:06:48.576 "cc81a454-a6f6-5262-8f80-0d955ebd809c" 00:06:48.576 ], 00:06:48.576 "product_name": "passthru", 00:06:48.576 "block_size": 512, 00:06:48.576 "num_blocks": 16384, 00:06:48.576 "uuid": "cc81a454-a6f6-5262-8f80-0d955ebd809c", 00:06:48.576 "assigned_rate_limits": { 00:06:48.576 "rw_ios_per_sec": 0, 00:06:48.576 "rw_mbytes_per_sec": 0, 00:06:48.576 "r_mbytes_per_sec": 0, 00:06:48.576 "w_mbytes_per_sec": 0 00:06:48.576 }, 00:06:48.576 "claimed": false, 00:06:48.576 "zoned": false, 00:06:48.576 "supported_io_types": { 00:06:48.576 "read": true, 00:06:48.576 "write": true, 00:06:48.576 "unmap": true, 00:06:48.576 "write_zeroes": true, 00:06:48.576 "flush": true, 00:06:48.576 "reset": true, 00:06:48.576 "compare": false, 00:06:48.576 "compare_and_write": false, 00:06:48.576 "abort": true, 00:06:48.576 "nvme_admin": false, 00:06:48.576 "nvme_io": false 00:06:48.576 }, 00:06:48.576 "memory_domains": [ 00:06:48.576 { 00:06:48.576 "dma_device_id": "system", 00:06:48.576 "dma_device_type": 1 00:06:48.576 }, 00:06:48.576 { 00:06:48.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.576 "dma_device_type": 2 00:06:48.576 } 00:06:48.576 ], 00:06:48.576 "driver_specific": { 00:06:48.576 "passthru": { 00:06:48.576 "name": "Passthru0", 00:06:48.576 "base_bdev_name": "Malloc2" 00:06:48.576 } 00:06:48.576 } 00:06:48.576 } 00:06:48.576 ]' 00:06:48.576 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.834 00:06:48.834 real 0m0.321s 00:06:48.834 user 0m0.200s 
00:06:48.834 sys 0m0.052s 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.834 18:24:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.834 ************************************ 00:06:48.834 END TEST rpc_daemon_integrity 00:06:48.834 ************************************ 00:06:48.834 18:24:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:48.834 18:24:48 rpc -- rpc/rpc.sh@84 -- # killprocess 74233 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@946 -- # '[' -z 74233 ']' 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@950 -- # kill -0 74233 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@951 -- # uname 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74233 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.834 killing process with pid 74233 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74233' 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@965 -- # kill 74233 00:06:48.834 18:24:48 rpc -- common/autotest_common.sh@970 -- # wait 74233 00:06:49.400 00:06:49.400 real 0m2.636s 00:06:49.400 user 0m3.212s 00:06:49.400 sys 0m0.781s 00:06:49.400 18:24:49 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.400 18:24:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 END TEST rpc 00:06:49.400 ************************************ 00:06:49.400 18:24:49 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:49.400 18:24:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.400 18:24:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.400 18:24:49 -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 START TEST skip_rpc 00:06:49.400 ************************************ 00:06:49.400 18:24:49 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:49.400 * Looking for test storage... 
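The rpc_integrity, rpc_plugins and rpc_daemon_integrity traces above all exercise the same JSON-RPC round trip: create a malloc bdev, layer a passthru bdev on it, confirm both appear in bdev_get_bdevs, then delete them and confirm the list is empty again. A condensed, stand-alone sketch of that sequence against a running spdk_tgt, calling scripts/rpc.py directly instead of the harness rpc_cmd wrapper (paths are the ones printed in the log; the 8 MiB / 512-byte sizing mirrors the num_blocks=16384, block_size=512 bdev dumped above):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # path as seen in the log

    malloc=$($rpc bdev_malloc_create 8 512)                # prints the new bdev name, e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0    # claims the malloc bdev

    # Both bdevs must be visible; this mirrors the jq length / '[' 2 == 2 ']' check in the trace.
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]

    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]         # list must be empty again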
00:06:49.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:49.400 18:24:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:49.400 18:24:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:49.400 18:24:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:49.400 18:24:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.400 18:24:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.400 18:24:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.400 ************************************ 00:06:49.400 START TEST skip_rpc 00:06:49.400 ************************************ 00:06:49.400 18:24:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:49.400 18:24:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74431 00:06:49.400 18:24:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:49.400 18:24:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.400 18:24:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:49.659 [2024-07-23 18:24:49.486605] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:49.659 [2024-07-23 18:24:49.486728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74431 ] 00:06:49.659 [2024-07-23 18:24:49.632688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.659 [2024-07-23 18:24:49.680946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74431 00:06:54.933 18:24:54 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74431 ']' 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74431 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74431 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.933 killing process with pid 74431 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74431' 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74431 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74431 00:06:54.933 00:06:54.933 real 0m5.424s 00:06:54.933 user 0m5.039s 00:06:54.933 sys 0m0.308s 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.933 18:24:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.933 ************************************ 00:06:54.933 END TEST skip_rpc 00:06:54.933 ************************************ 00:06:54.933 18:24:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:54.933 18:24:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.933 18:24:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.933 18:24:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.933 ************************************ 00:06:54.933 START TEST skip_rpc_with_json 00:06:54.933 ************************************ 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74514 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74514 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74514 ']' 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.933 18:24:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.933 [2024-07-23 18:24:54.970928] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
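The skip_rpc run that just finished above is the inverse check: with spdk_tgt started under --no-rpc-server, every RPC has to fail, and the harness' NOT wrapper simply asserts the non-zero exit status. A minimal stand-alone version of that assertion (binary and script paths from the log; the fixed 5-second sleep follows rpc/skip_rpc.sh@19 rather than polling the socket):

    #!/usr/bin/env bash
    set -u
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $spdk_tgt --no-rpc-server -m 0x1 &    # no RPC listener is created
    pid=$!
    sleep 5

    if $rpc spdk_get_version; then        # nothing listens on /var/tmp/spdk.sock, so this must fail
        echo "RPC unexpectedly succeeded" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid" || true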
00:06:54.933 [2024-07-23 18:24:54.971054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 00:06:55.194 [2024-07-23 18:24:55.114653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.194 [2024-07-23 18:24:55.165975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.764 [2024-07-23 18:24:55.760745] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:55.764 request: 00:06:55.764 { 00:06:55.764 "trtype": "tcp", 00:06:55.764 "method": "nvmf_get_transports", 00:06:55.764 "req_id": 1 00:06:55.764 } 00:06:55.764 Got JSON-RPC error response 00:06:55.764 response: 00:06:55.764 { 00:06:55.764 "code": -19, 00:06:55.764 "message": "No such device" 00:06:55.764 } 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.764 [2024-07-23 18:24:55.772822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.764 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.024 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.024 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:56.024 { 00:06:56.024 "subsystems": [ 00:06:56.024 { 00:06:56.024 "subsystem": "keyring", 00:06:56.024 "config": [] 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "subsystem": "iobuf", 00:06:56.024 "config": [ 00:06:56.024 { 00:06:56.024 "method": "iobuf_set_options", 00:06:56.024 "params": { 00:06:56.024 "small_pool_count": 8192, 00:06:56.024 "large_pool_count": 1024, 00:06:56.024 "small_bufsize": 8192, 00:06:56.024 "large_bufsize": 135168 00:06:56.024 } 00:06:56.024 } 00:06:56.024 ] 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "subsystem": "sock", 00:06:56.024 "config": [ 00:06:56.024 { 00:06:56.024 "method": "sock_set_default_impl", 00:06:56.024 "params": { 00:06:56.024 "impl_name": "posix" 00:06:56.024 } 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "method": "sock_impl_set_options", 00:06:56.024 "params": { 00:06:56.024 "impl_name": "ssl", 00:06:56.024 "recv_buf_size": 4096, 00:06:56.024 "send_buf_size": 4096, 00:06:56.024 
"enable_recv_pipe": true, 00:06:56.024 "enable_quickack": false, 00:06:56.024 "enable_placement_id": 0, 00:06:56.024 "enable_zerocopy_send_server": true, 00:06:56.024 "enable_zerocopy_send_client": false, 00:06:56.024 "zerocopy_threshold": 0, 00:06:56.024 "tls_version": 0, 00:06:56.024 "enable_ktls": false 00:06:56.024 } 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "method": "sock_impl_set_options", 00:06:56.024 "params": { 00:06:56.024 "impl_name": "posix", 00:06:56.024 "recv_buf_size": 2097152, 00:06:56.024 "send_buf_size": 2097152, 00:06:56.024 "enable_recv_pipe": true, 00:06:56.024 "enable_quickack": false, 00:06:56.024 "enable_placement_id": 0, 00:06:56.024 "enable_zerocopy_send_server": true, 00:06:56.024 "enable_zerocopy_send_client": false, 00:06:56.024 "zerocopy_threshold": 0, 00:06:56.024 "tls_version": 0, 00:06:56.024 "enable_ktls": false 00:06:56.024 } 00:06:56.024 } 00:06:56.024 ] 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "subsystem": "vmd", 00:06:56.024 "config": [] 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "subsystem": "accel", 00:06:56.024 "config": [ 00:06:56.024 { 00:06:56.024 "method": "accel_set_options", 00:06:56.024 "params": { 00:06:56.024 "small_cache_size": 128, 00:06:56.024 "large_cache_size": 16, 00:06:56.024 "task_count": 2048, 00:06:56.024 "sequence_count": 2048, 00:06:56.024 "buf_count": 2048 00:06:56.024 } 00:06:56.024 } 00:06:56.024 ] 00:06:56.024 }, 00:06:56.024 { 00:06:56.024 "subsystem": "bdev", 00:06:56.024 "config": [ 00:06:56.024 { 00:06:56.025 "method": "bdev_set_options", 00:06:56.025 "params": { 00:06:56.025 "bdev_io_pool_size": 65535, 00:06:56.025 "bdev_io_cache_size": 256, 00:06:56.025 "bdev_auto_examine": true, 00:06:56.025 "iobuf_small_cache_size": 128, 00:06:56.025 "iobuf_large_cache_size": 16 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_raid_set_options", 00:06:56.025 "params": { 00:06:56.025 "process_window_size_kb": 1024 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_iscsi_set_options", 00:06:56.025 "params": { 00:06:56.025 "timeout_sec": 30 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_nvme_set_options", 00:06:56.025 "params": { 00:06:56.025 "action_on_timeout": "none", 00:06:56.025 "timeout_us": 0, 00:06:56.025 "timeout_admin_us": 0, 00:06:56.025 "keep_alive_timeout_ms": 10000, 00:06:56.025 "arbitration_burst": 0, 00:06:56.025 "low_priority_weight": 0, 00:06:56.025 "medium_priority_weight": 0, 00:06:56.025 "high_priority_weight": 0, 00:06:56.025 "nvme_adminq_poll_period_us": 10000, 00:06:56.025 "nvme_ioq_poll_period_us": 0, 00:06:56.025 "io_queue_requests": 0, 00:06:56.025 "delay_cmd_submit": true, 00:06:56.025 "transport_retry_count": 4, 00:06:56.025 "bdev_retry_count": 3, 00:06:56.025 "transport_ack_timeout": 0, 00:06:56.025 "ctrlr_loss_timeout_sec": 0, 00:06:56.025 "reconnect_delay_sec": 0, 00:06:56.025 "fast_io_fail_timeout_sec": 0, 00:06:56.025 "disable_auto_failback": false, 00:06:56.025 "generate_uuids": false, 00:06:56.025 "transport_tos": 0, 00:06:56.025 "nvme_error_stat": false, 00:06:56.025 "rdma_srq_size": 0, 00:06:56.025 "io_path_stat": false, 00:06:56.025 "allow_accel_sequence": false, 00:06:56.025 "rdma_max_cq_size": 0, 00:06:56.025 "rdma_cm_event_timeout_ms": 0, 00:06:56.025 "dhchap_digests": [ 00:06:56.025 "sha256", 00:06:56.025 "sha384", 00:06:56.025 "sha512" 00:06:56.025 ], 00:06:56.025 "dhchap_dhgroups": [ 00:06:56.025 "null", 00:06:56.025 "ffdhe2048", 00:06:56.025 "ffdhe3072", 00:06:56.025 "ffdhe4096", 00:06:56.025 "ffdhe6144", 
00:06:56.025 "ffdhe8192" 00:06:56.025 ] 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_nvme_set_hotplug", 00:06:56.025 "params": { 00:06:56.025 "period_us": 100000, 00:06:56.025 "enable": false 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "bdev_wait_for_examine" 00:06:56.025 } 00:06:56.025 ] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "scsi", 00:06:56.025 "config": null 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "scheduler", 00:06:56.025 "config": [ 00:06:56.025 { 00:06:56.025 "method": "framework_set_scheduler", 00:06:56.025 "params": { 00:06:56.025 "name": "static" 00:06:56.025 } 00:06:56.025 } 00:06:56.025 ] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "vhost_scsi", 00:06:56.025 "config": [] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "vhost_blk", 00:06:56.025 "config": [] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "ublk", 00:06:56.025 "config": [] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "nbd", 00:06:56.025 "config": [] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "nvmf", 00:06:56.025 "config": [ 00:06:56.025 { 00:06:56.025 "method": "nvmf_set_config", 00:06:56.025 "params": { 00:06:56.025 "discovery_filter": "match_any", 00:06:56.025 "admin_cmd_passthru": { 00:06:56.025 "identify_ctrlr": false 00:06:56.025 } 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "nvmf_set_max_subsystems", 00:06:56.025 "params": { 00:06:56.025 "max_subsystems": 1024 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "nvmf_set_crdt", 00:06:56.025 "params": { 00:06:56.025 "crdt1": 0, 00:06:56.025 "crdt2": 0, 00:06:56.025 "crdt3": 0 00:06:56.025 } 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "method": "nvmf_create_transport", 00:06:56.025 "params": { 00:06:56.025 "trtype": "TCP", 00:06:56.025 "max_queue_depth": 128, 00:06:56.025 "max_io_qpairs_per_ctrlr": 127, 00:06:56.025 "in_capsule_data_size": 4096, 00:06:56.025 "max_io_size": 131072, 00:06:56.025 "io_unit_size": 131072, 00:06:56.025 "max_aq_depth": 128, 00:06:56.025 "num_shared_buffers": 511, 00:06:56.025 "buf_cache_size": 4294967295, 00:06:56.025 "dif_insert_or_strip": false, 00:06:56.025 "zcopy": false, 00:06:56.025 "c2h_success": true, 00:06:56.025 "sock_priority": 0, 00:06:56.025 "abort_timeout_sec": 1, 00:06:56.025 "ack_timeout": 0, 00:06:56.025 "data_wr_pool_size": 0 00:06:56.025 } 00:06:56.025 } 00:06:56.025 ] 00:06:56.025 }, 00:06:56.025 { 00:06:56.025 "subsystem": "iscsi", 00:06:56.025 "config": [ 00:06:56.025 { 00:06:56.026 "method": "iscsi_set_options", 00:06:56.026 "params": { 00:06:56.026 "node_base": "iqn.2016-06.io.spdk", 00:06:56.026 "max_sessions": 128, 00:06:56.026 "max_connections_per_session": 2, 00:06:56.026 "max_queue_depth": 64, 00:06:56.026 "default_time2wait": 2, 00:06:56.026 "default_time2retain": 20, 00:06:56.026 "first_burst_length": 8192, 00:06:56.026 "immediate_data": true, 00:06:56.026 "allow_duplicated_isid": false, 00:06:56.026 "error_recovery_level": 0, 00:06:56.026 "nop_timeout": 60, 00:06:56.026 "nop_in_interval": 30, 00:06:56.026 "disable_chap": false, 00:06:56.026 "require_chap": false, 00:06:56.026 "mutual_chap": false, 00:06:56.026 "chap_group": 0, 00:06:56.026 "max_large_datain_per_connection": 64, 00:06:56.026 "max_r2t_per_connection": 4, 00:06:56.026 "pdu_pool_size": 36864, 00:06:56.026 "immediate_data_pool_size": 16384, 00:06:56.026 "data_out_pool_size": 2048 00:06:56.026 } 00:06:56.026 } 00:06:56.026 ] 00:06:56.026 } 00:06:56.026 ] 
00:06:56.026 } 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74514 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74514 ']' 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74514 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74514 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.026 killing process with pid 74514 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74514' 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74514 00:06:56.026 18:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74514 00:06:56.286 18:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:56.286 18:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74539 00:06:56.286 18:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74539 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74539 ']' 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74539 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74539 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.571 killing process with pid 74539 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74539' 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74539 00:07:01.571 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74539 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:01.831 00:07:01.831 real 0m6.870s 00:07:01.831 user 0m6.418s 00:07:01.831 sys 0m0.672s 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:01.831 
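skip_rpc_with_json, whose saved configuration is the large subsystem dump above, verifies that a config captured with save_config can restore the target without any RPC traffic: create a TCP transport, write the JSON, restart spdk_tgt with --json and no RPC server, then grep the new log for the transport-init notice. A trimmed sketch of that round trip (file locations follow the log; the harness' waitforlisten is replaced by plain sleeps):

    #!/usr/bin/env bash
    set -euo pipefail
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

    $spdk_tgt -m 0x1 & pid=$!; sleep 5
    $rpc nvmf_create_transport -t tcp           # state that must survive the reload
    $rpc save_config > "$cfg"
    kill "$pid"; wait "$pid" || true

    # Reload purely from JSON: no RPC server, so the saved config is the only input.
    $spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 & pid=$!; sleep 5
    kill "$pid"; wait "$pid" || true
    grep -q 'TCP Transport Init' "$log"         # proves the transport was re-created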
************************************ 00:07:01.831 END TEST skip_rpc_with_json 00:07:01.831 ************************************ 00:07:01.831 18:25:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:01.831 18:25:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.831 18:25:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.831 18:25:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.831 ************************************ 00:07:01.831 START TEST skip_rpc_with_delay 00:07:01.831 ************************************ 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:01.831 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:02.091 [2024-07-23 18:25:01.908312] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:02.091 [2024-07-23 18:25:01.908458] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.091 00:07:02.091 real 0m0.153s 00:07:02.091 user 0m0.081s 00:07:02.091 sys 0m0.070s 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.091 18:25:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:02.091 ************************************ 00:07:02.091 END TEST skip_rpc_with_delay 00:07:02.091 ************************************ 00:07:02.091 18:25:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:02.091 18:25:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:02.091 18:25:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:02.091 18:25:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.092 18:25:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.092 18:25:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.092 ************************************ 00:07:02.092 START TEST exit_on_failed_rpc_init 00:07:02.092 ************************************ 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74652 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74652 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74652 ']' 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.092 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.092 [2024-07-23 18:25:02.112402] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
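The pair of errors above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to make spdk_tgt exit non-zero, which is all the test asserts. Reduced to a single check (same binary path and flags as in the log):

    #!/usr/bin/env bash
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # Invalid flag combination; a zero exit status here would be a regression.
    if $spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi
    echo "rejected as expected"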
00:07:02.092 [2024-07-23 18:25:02.112878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74652 ] 00:07:02.351 [2024-07-23 18:25:02.244210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.351 [2024-07-23 18:25:02.288499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:02.921 18:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:03.181 [2024-07-23 18:25:03.002402] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:03.181 [2024-07-23 18:25:03.002528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74669 ] 00:07:03.181 [2024-07-23 18:25:03.148010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.181 [2024-07-23 18:25:03.198656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.181 [2024-07-23 18:25:03.198771] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:03.181 [2024-07-23 18:25:03.198799] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:03.181 [2024-07-23 18:25:03.198828] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74652 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74652 ']' 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74652 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74652 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.439 killing process with pid 74652 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74652' 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74652 00:07:03.439 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74652 00:07:03.698 00:07:03.698 real 0m1.684s 00:07:03.698 user 0m1.818s 00:07:03.698 sys 0m0.452s 00:07:03.698 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.698 18:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.698 ************************************ 00:07:03.698 END TEST exit_on_failed_rpc_init 00:07:03.698 ************************************ 00:07:03.957 18:25:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:03.957 ************************************ 00:07:03.957 END TEST skip_rpc 00:07:03.957 ************************************ 00:07:03.957 00:07:03.957 real 0m14.508s 00:07:03.957 user 0m13.487s 00:07:03.957 sys 0m1.760s 00:07:03.957 18:25:03 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.957 18:25:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.957 18:25:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:03.957 18:25:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.957 18:25:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.957 18:25:03 -- common/autotest_common.sh@10 -- # set +x 00:07:03.957 
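exit_on_failed_rpc_init, which produced the "socket in use" errors just above, reduces to: a second target that cannot claim /var/tmp/spdk.sock must terminate with a non-zero status while the first instance keeps running. Sketched without the harness wrappers (core masks 0x1/0x2 as in the log; startup waits simplified to a sleep):

    #!/usr/bin/env bash
    set -u
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 & first=$!            # owns the default RPC socket /var/tmp/spdk.sock
    sleep 5

    if $spdk_tgt -m 0x2; then              # same socket path, so init must fail
        echo "second instance should not have started" >&2
        kill "$first"; exit 1
    fi
    kill "$first"; wait "$first" || true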
************************************ 00:07:03.957 START TEST rpc_client 00:07:03.957 ************************************ 00:07:03.957 18:25:03 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:03.957 * Looking for test storage... 00:07:03.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:03.957 18:25:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:03.957 OK 00:07:03.957 18:25:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:03.957 00:07:03.957 real 0m0.180s 00:07:03.957 user 0m0.083s 00:07:03.957 sys 0m0.104s 00:07:03.957 18:25:03 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.957 18:25:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:03.957 ************************************ 00:07:03.957 END TEST rpc_client 00:07:03.958 ************************************ 00:07:04.217 18:25:04 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:04.217 18:25:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.217 18:25:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.217 18:25:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.217 ************************************ 00:07:04.217 START TEST json_config 00:07:04.217 ************************************ 00:07:04.217 18:25:04 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:04.217 18:25:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:07:04.217 18:25:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.218 18:25:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.218 18:25:04 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.218 18:25:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.218 18:25:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.218 18:25:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.218 18:25:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.218 18:25:04 json_config -- paths/export.sh@5 -- # export PATH 00:07:04.218 18:25:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@47 -- # : 0 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.218 18:25:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:04.218 WARNING: No tests are enabled so not running JSON configuration tests 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:04.218 18:25:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:04.218 00:07:04.218 real 0m0.104s 00:07:04.218 user 0m0.053s 00:07:04.218 sys 0m0.051s 00:07:04.218 18:25:04 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.218 18:25:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.218 ************************************ 00:07:04.218 END TEST json_config 00:07:04.218 ************************************ 00:07:04.218 18:25:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:04.218 18:25:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.218 18:25:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.218 18:25:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.218 ************************************ 00:07:04.218 START TEST json_config_extra_key 00:07:04.218 ************************************ 00:07:04.218 18:25:04 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a665da6e-2bb8-44e2-a38e-0b9ae5ea0de5 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.478 18:25:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.478 18:25:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.478 18:25:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.478 
18:25:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.478 18:25:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.478 18:25:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.478 18:25:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:04.478 18:25:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.478 18:25:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:04.478 18:25:04 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:04.478 INFO: launching applications... 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:04.478 18:25:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74827 00:07:04.478 Waiting for target to run... 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74827 /var/tmp/spdk_tgt.sock 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74827 ']' 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.478 18:25:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 18:25:04 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:04.478 [2024-07-23 18:25:04.383538] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:04.478 [2024-07-23 18:25:04.383674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74827 ] 00:07:04.737 [2024-07-23 18:25:04.736664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.737 [2024-07-23 18:25:04.769387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.365 18:25:05 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:05.365 18:25:05 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:07:05.365 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:05.365 INFO: shutting down applications... 00:07:05.365 18:25:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:05.365 18:25:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74827 ]] 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74827 00:07:05.365 18:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:05.366 18:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.366 18:25:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74827 00:07:05.366 18:25:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74827 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:05.934 SPDK target shutdown done 00:07:05.934 18:25:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:05.934 Success 00:07:05.934 18:25:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:05.934 00:07:05.934 real 0m1.479s 00:07:05.934 user 0m1.224s 00:07:05.934 sys 0m0.410s 00:07:05.934 18:25:05 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.934 18:25:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:05.934 ************************************ 00:07:05.934 END TEST json_config_extra_key 00:07:05.934 ************************************ 00:07:05.935 18:25:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.935 18:25:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.935 18:25:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.935 18:25:05 -- common/autotest_common.sh@10 -- # set +x 00:07:05.935 ************************************ 00:07:05.935 START TEST alias_rpc 00:07:05.935 ************************************ 
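The json_config_extra_key run above follows the json_config/common.sh pattern its trace exposes: launch spdk_tgt with the extra_key.json configuration on a private RPC socket, wait for the socket to come up, then shut the target down with SIGINT and poll until the PID disappears. A minimal stand-alone sketch of that flow, using only the binary, flags and paths visible in the trace; the 30-iteration and 0.5 s polling limits are read off the traced loop, and replacing waitforlisten with a plain sleep is an assumption for brevity:

    # start the target with the extra-key JSON config on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    # waitforlisten in the trace polls the RPC socket; a short sleep stands in here
    sleep 1

    # shut it down: SIGINT first, then poll up to 30 * 0.5 s until the PID is gone
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'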
00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.935 * Looking for test storage... 00:07:05.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:05.935 18:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.935 18:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74893 00:07:05.935 18:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74893 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74893 ']' 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.935 18:25:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.935 18:25:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.935 [2024-07-23 18:25:05.927952] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:05.935 [2024-07-23 18:25:05.928079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74893 ] 00:07:06.194 [2024-07-23 18:25:06.074563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.194 [2024-07-23 18:25:06.122631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.762 18:25:06 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.762 18:25:06 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:06.762 18:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:07.021 18:25:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74893 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74893 ']' 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74893 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74893 00:07:07.021 killing process with pid 74893 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74893' 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@965 -- # kill 74893 00:07:07.021 18:25:06 alias_rpc -- common/autotest_common.sh@970 -- # wait 74893 00:07:07.281 ************************************ 00:07:07.281 END TEST alias_rpc 00:07:07.281 ************************************ 00:07:07.281 00:07:07.281 real 0m1.573s 00:07:07.281 user 0m1.648s 00:07:07.281 sys 0m0.398s 00:07:07.281 
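The alias_rpc section, like the ones before it, leans on the autotest_common.sh helpers whose steps are traced above: waitforlisten polls the freshly started target's UNIX socket (max_retries=100 in the trace), and killprocess checks that the PID still belongs to an SPDK reactor before killing and waiting on it. A simplified reconstruction of that teardown, assembled from the traced checks rather than quoted from the script, so the exact helper body is an assumption:

    # simplified killprocess, reconstructed from the checks traced above
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # refuse an empty PID
        kill -0 "$pid" || return 1           # the target must still be running
        if [ "$(uname)" = Linux ]; then
            # spdk_tgt's main thread reports itself as reactor_0
            ps --no-headers -o comm= "$pid"
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

    # usage, matching the trace: killprocess 74893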
18:25:07 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.281 18:25:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.541 18:25:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:07.541 18:25:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:07.541 18:25:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:07.541 18:25:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.541 18:25:07 -- common/autotest_common.sh@10 -- # set +x 00:07:07.541 ************************************ 00:07:07.541 START TEST spdkcli_tcp 00:07:07.541 ************************************ 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:07.541 * Looking for test storage... 00:07:07.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74964 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:07.541 18:25:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74964 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 74964 ']' 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.541 18:25:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.542 18:25:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.542 18:25:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.802 [2024-07-23 18:25:07.595391] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:07.802 [2024-07-23 18:25:07.595605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74964 ] 00:07:07.802 [2024-07-23 18:25:07.741107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.802 [2024-07-23 18:25:07.791112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.802 [2024-07-23 18:25:07.791141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.371 18:25:08 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.371 18:25:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:07:08.371 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74980 00:07:08.371 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:08.371 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:08.631 [ 00:07:08.631 "bdev_malloc_delete", 00:07:08.631 "bdev_malloc_create", 00:07:08.631 "bdev_null_resize", 00:07:08.631 "bdev_null_delete", 00:07:08.631 "bdev_null_create", 00:07:08.631 "bdev_nvme_cuse_unregister", 00:07:08.631 "bdev_nvme_cuse_register", 00:07:08.631 "bdev_opal_new_user", 00:07:08.631 "bdev_opal_set_lock_state", 00:07:08.631 "bdev_opal_delete", 00:07:08.631 "bdev_opal_get_info", 00:07:08.631 "bdev_opal_create", 00:07:08.631 "bdev_nvme_opal_revert", 00:07:08.631 "bdev_nvme_opal_init", 00:07:08.631 "bdev_nvme_send_cmd", 00:07:08.631 "bdev_nvme_get_path_iostat", 00:07:08.631 "bdev_nvme_get_mdns_discovery_info", 00:07:08.631 "bdev_nvme_stop_mdns_discovery", 00:07:08.631 "bdev_nvme_start_mdns_discovery", 00:07:08.631 "bdev_nvme_set_multipath_policy", 00:07:08.631 "bdev_nvme_set_preferred_path", 00:07:08.631 "bdev_nvme_get_io_paths", 00:07:08.631 "bdev_nvme_remove_error_injection", 00:07:08.631 "bdev_nvme_add_error_injection", 00:07:08.631 "bdev_nvme_get_discovery_info", 00:07:08.631 "bdev_nvme_stop_discovery", 00:07:08.631 "bdev_nvme_start_discovery", 00:07:08.631 "bdev_nvme_get_controller_health_info", 00:07:08.631 "bdev_nvme_disable_controller", 00:07:08.631 "bdev_nvme_enable_controller", 00:07:08.631 "bdev_nvme_reset_controller", 00:07:08.631 "bdev_nvme_get_transport_statistics", 00:07:08.631 "bdev_nvme_apply_firmware", 00:07:08.631 "bdev_nvme_detach_controller", 00:07:08.631 "bdev_nvme_get_controllers", 00:07:08.631 "bdev_nvme_attach_controller", 00:07:08.631 "bdev_nvme_set_hotplug", 00:07:08.631 "bdev_nvme_set_options", 00:07:08.631 "bdev_passthru_delete", 00:07:08.631 "bdev_passthru_create", 00:07:08.631 "bdev_lvol_set_parent_bdev", 00:07:08.631 "bdev_lvol_set_parent", 00:07:08.631 "bdev_lvol_check_shallow_copy", 00:07:08.631 "bdev_lvol_start_shallow_copy", 00:07:08.631 "bdev_lvol_grow_lvstore", 00:07:08.631 "bdev_lvol_get_lvols", 00:07:08.631 "bdev_lvol_get_lvstores", 00:07:08.631 "bdev_lvol_delete", 00:07:08.631 "bdev_lvol_set_read_only", 00:07:08.631 "bdev_lvol_resize", 00:07:08.631 "bdev_lvol_decouple_parent", 00:07:08.631 "bdev_lvol_inflate", 00:07:08.631 "bdev_lvol_rename", 00:07:08.631 "bdev_lvol_clone_bdev", 00:07:08.631 "bdev_lvol_clone", 00:07:08.631 "bdev_lvol_snapshot", 00:07:08.631 "bdev_lvol_create", 00:07:08.631 "bdev_lvol_delete_lvstore", 00:07:08.631 "bdev_lvol_rename_lvstore", 00:07:08.631 "bdev_lvol_create_lvstore", 00:07:08.631 
"bdev_raid_set_options", 00:07:08.632 "bdev_raid_remove_base_bdev", 00:07:08.632 "bdev_raid_add_base_bdev", 00:07:08.632 "bdev_raid_delete", 00:07:08.632 "bdev_raid_create", 00:07:08.632 "bdev_raid_get_bdevs", 00:07:08.632 "bdev_error_inject_error", 00:07:08.632 "bdev_error_delete", 00:07:08.632 "bdev_error_create", 00:07:08.632 "bdev_split_delete", 00:07:08.632 "bdev_split_create", 00:07:08.632 "bdev_delay_delete", 00:07:08.632 "bdev_delay_create", 00:07:08.632 "bdev_delay_update_latency", 00:07:08.632 "bdev_zone_block_delete", 00:07:08.632 "bdev_zone_block_create", 00:07:08.632 "blobfs_create", 00:07:08.632 "blobfs_detect", 00:07:08.632 "blobfs_set_cache_size", 00:07:08.632 "bdev_xnvme_delete", 00:07:08.632 "bdev_xnvme_create", 00:07:08.632 "bdev_aio_delete", 00:07:08.632 "bdev_aio_rescan", 00:07:08.632 "bdev_aio_create", 00:07:08.632 "bdev_ftl_set_property", 00:07:08.632 "bdev_ftl_get_properties", 00:07:08.632 "bdev_ftl_get_stats", 00:07:08.632 "bdev_ftl_unmap", 00:07:08.632 "bdev_ftl_unload", 00:07:08.632 "bdev_ftl_delete", 00:07:08.632 "bdev_ftl_load", 00:07:08.632 "bdev_ftl_create", 00:07:08.632 "bdev_virtio_attach_controller", 00:07:08.632 "bdev_virtio_scsi_get_devices", 00:07:08.632 "bdev_virtio_detach_controller", 00:07:08.632 "bdev_virtio_blk_set_hotplug", 00:07:08.632 "bdev_iscsi_delete", 00:07:08.632 "bdev_iscsi_create", 00:07:08.632 "bdev_iscsi_set_options", 00:07:08.632 "accel_error_inject_error", 00:07:08.632 "ioat_scan_accel_module", 00:07:08.632 "dsa_scan_accel_module", 00:07:08.632 "iaa_scan_accel_module", 00:07:08.632 "keyring_file_remove_key", 00:07:08.632 "keyring_file_add_key", 00:07:08.632 "keyring_linux_set_options", 00:07:08.632 "iscsi_get_histogram", 00:07:08.632 "iscsi_enable_histogram", 00:07:08.632 "iscsi_set_options", 00:07:08.632 "iscsi_get_auth_groups", 00:07:08.632 "iscsi_auth_group_remove_secret", 00:07:08.632 "iscsi_auth_group_add_secret", 00:07:08.632 "iscsi_delete_auth_group", 00:07:08.632 "iscsi_create_auth_group", 00:07:08.632 "iscsi_set_discovery_auth", 00:07:08.632 "iscsi_get_options", 00:07:08.632 "iscsi_target_node_request_logout", 00:07:08.632 "iscsi_target_node_set_redirect", 00:07:08.632 "iscsi_target_node_set_auth", 00:07:08.632 "iscsi_target_node_add_lun", 00:07:08.632 "iscsi_get_stats", 00:07:08.632 "iscsi_get_connections", 00:07:08.632 "iscsi_portal_group_set_auth", 00:07:08.632 "iscsi_start_portal_group", 00:07:08.632 "iscsi_delete_portal_group", 00:07:08.632 "iscsi_create_portal_group", 00:07:08.632 "iscsi_get_portal_groups", 00:07:08.632 "iscsi_delete_target_node", 00:07:08.632 "iscsi_target_node_remove_pg_ig_maps", 00:07:08.632 "iscsi_target_node_add_pg_ig_maps", 00:07:08.632 "iscsi_create_target_node", 00:07:08.632 "iscsi_get_target_nodes", 00:07:08.632 "iscsi_delete_initiator_group", 00:07:08.632 "iscsi_initiator_group_remove_initiators", 00:07:08.632 "iscsi_initiator_group_add_initiators", 00:07:08.632 "iscsi_create_initiator_group", 00:07:08.632 "iscsi_get_initiator_groups", 00:07:08.632 "nvmf_set_crdt", 00:07:08.632 "nvmf_set_config", 00:07:08.632 "nvmf_set_max_subsystems", 00:07:08.632 "nvmf_stop_mdns_prr", 00:07:08.632 "nvmf_publish_mdns_prr", 00:07:08.632 "nvmf_subsystem_get_listeners", 00:07:08.632 "nvmf_subsystem_get_qpairs", 00:07:08.632 "nvmf_subsystem_get_controllers", 00:07:08.632 "nvmf_get_stats", 00:07:08.632 "nvmf_get_transports", 00:07:08.632 "nvmf_create_transport", 00:07:08.632 "nvmf_get_targets", 00:07:08.632 "nvmf_delete_target", 00:07:08.632 "nvmf_create_target", 00:07:08.632 "nvmf_subsystem_allow_any_host", 
00:07:08.632 "nvmf_subsystem_remove_host", 00:07:08.632 "nvmf_subsystem_add_host", 00:07:08.632 "nvmf_ns_remove_host", 00:07:08.632 "nvmf_ns_add_host", 00:07:08.632 "nvmf_subsystem_remove_ns", 00:07:08.632 "nvmf_subsystem_add_ns", 00:07:08.632 "nvmf_subsystem_listener_set_ana_state", 00:07:08.632 "nvmf_discovery_get_referrals", 00:07:08.632 "nvmf_discovery_remove_referral", 00:07:08.632 "nvmf_discovery_add_referral", 00:07:08.632 "nvmf_subsystem_remove_listener", 00:07:08.632 "nvmf_subsystem_add_listener", 00:07:08.632 "nvmf_delete_subsystem", 00:07:08.632 "nvmf_create_subsystem", 00:07:08.632 "nvmf_get_subsystems", 00:07:08.632 "env_dpdk_get_mem_stats", 00:07:08.632 "nbd_get_disks", 00:07:08.632 "nbd_stop_disk", 00:07:08.632 "nbd_start_disk", 00:07:08.632 "ublk_recover_disk", 00:07:08.632 "ublk_get_disks", 00:07:08.632 "ublk_stop_disk", 00:07:08.632 "ublk_start_disk", 00:07:08.632 "ublk_destroy_target", 00:07:08.632 "ublk_create_target", 00:07:08.632 "virtio_blk_create_transport", 00:07:08.632 "virtio_blk_get_transports", 00:07:08.632 "vhost_controller_set_coalescing", 00:07:08.632 "vhost_get_controllers", 00:07:08.632 "vhost_delete_controller", 00:07:08.632 "vhost_create_blk_controller", 00:07:08.632 "vhost_scsi_controller_remove_target", 00:07:08.632 "vhost_scsi_controller_add_target", 00:07:08.632 "vhost_start_scsi_controller", 00:07:08.632 "vhost_create_scsi_controller", 00:07:08.632 "thread_set_cpumask", 00:07:08.632 "framework_get_scheduler", 00:07:08.632 "framework_set_scheduler", 00:07:08.632 "framework_get_reactors", 00:07:08.632 "thread_get_io_channels", 00:07:08.632 "thread_get_pollers", 00:07:08.632 "thread_get_stats", 00:07:08.632 "framework_monitor_context_switch", 00:07:08.632 "spdk_kill_instance", 00:07:08.632 "log_enable_timestamps", 00:07:08.632 "log_get_flags", 00:07:08.632 "log_clear_flag", 00:07:08.632 "log_set_flag", 00:07:08.632 "log_get_level", 00:07:08.632 "log_set_level", 00:07:08.632 "log_get_print_level", 00:07:08.632 "log_set_print_level", 00:07:08.632 "framework_enable_cpumask_locks", 00:07:08.632 "framework_disable_cpumask_locks", 00:07:08.632 "framework_wait_init", 00:07:08.632 "framework_start_init", 00:07:08.632 "scsi_get_devices", 00:07:08.632 "bdev_get_histogram", 00:07:08.632 "bdev_enable_histogram", 00:07:08.632 "bdev_set_qos_limit", 00:07:08.632 "bdev_set_qd_sampling_period", 00:07:08.632 "bdev_get_bdevs", 00:07:08.632 "bdev_reset_iostat", 00:07:08.632 "bdev_get_iostat", 00:07:08.632 "bdev_examine", 00:07:08.632 "bdev_wait_for_examine", 00:07:08.632 "bdev_set_options", 00:07:08.632 "notify_get_notifications", 00:07:08.632 "notify_get_types", 00:07:08.632 "accel_get_stats", 00:07:08.632 "accel_set_options", 00:07:08.632 "accel_set_driver", 00:07:08.632 "accel_crypto_key_destroy", 00:07:08.632 "accel_crypto_keys_get", 00:07:08.632 "accel_crypto_key_create", 00:07:08.632 "accel_assign_opc", 00:07:08.632 "accel_get_module_info", 00:07:08.632 "accel_get_opc_assignments", 00:07:08.632 "vmd_rescan", 00:07:08.632 "vmd_remove_device", 00:07:08.632 "vmd_enable", 00:07:08.632 "sock_get_default_impl", 00:07:08.632 "sock_set_default_impl", 00:07:08.632 "sock_impl_set_options", 00:07:08.632 "sock_impl_get_options", 00:07:08.632 "iobuf_get_stats", 00:07:08.632 "iobuf_set_options", 00:07:08.632 "framework_get_pci_devices", 00:07:08.632 "framework_get_config", 00:07:08.632 "framework_get_subsystems", 00:07:08.632 "trace_get_info", 00:07:08.632 "trace_get_tpoint_group_mask", 00:07:08.632 "trace_disable_tpoint_group", 00:07:08.632 "trace_enable_tpoint_group", 
00:07:08.632 "trace_clear_tpoint_mask", 00:07:08.632 "trace_set_tpoint_mask", 00:07:08.632 "keyring_get_keys", 00:07:08.632 "spdk_get_version", 00:07:08.632 "rpc_get_methods" 00:07:08.632 ] 00:07:08.632 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.632 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:08.632 18:25:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74964 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 74964 ']' 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 74964 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74964 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.632 killing process with pid 74964 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74964' 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 74964 00:07:08.632 18:25:08 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 74964 00:07:09.202 ************************************ 00:07:09.202 END TEST spdkcli_tcp 00:07:09.202 ************************************ 00:07:09.202 00:07:09.202 real 0m1.632s 00:07:09.202 user 0m2.704s 00:07:09.202 sys 0m0.516s 00:07:09.202 18:25:09 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.202 18:25:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.202 18:25:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.202 18:25:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.202 18:25:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.202 18:25:09 -- common/autotest_common.sh@10 -- # set +x 00:07:09.202 ************************************ 00:07:09.202 START TEST dpdk_mem_utility 00:07:09.202 ************************************ 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:09.202 * Looking for test storage... 00:07:09.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:09.202 18:25:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:09.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.202 18:25:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75051 00:07:09.202 18:25:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:09.202 18:25:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75051 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75051 ']' 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.202 18:25:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:09.462 [2024-07-23 18:25:09.280937] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:09.462 [2024-07-23 18:25:09.281059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75051 ] 00:07:09.462 [2024-07-23 18:25:09.418068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.462 [2024-07-23 18:25:09.466940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.421 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.421 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:10.421 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:10.421 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:10.421 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.421 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:10.421 { 00:07:10.421 "filename": "/tmp/spdk_mem_dump.txt" 00:07:10.421 } 00:07:10.421 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.421 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:10.421 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:10.421 1 heaps totaling size 814.000000 MiB 00:07:10.421 size: 814.000000 MiB heap id: 0 00:07:10.421 end heaps---------- 00:07:10.421 8 mempools totaling size 598.116089 MiB 00:07:10.421 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:10.421 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:10.421 size: 84.521057 MiB name: bdev_io_75051 00:07:10.421 size: 51.011292 MiB name: evtpool_75051 00:07:10.421 size: 50.003479 MiB name: msgpool_75051 00:07:10.421 size: 21.763794 MiB name: PDU_Pool 00:07:10.421 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:10.421 size: 0.026123 MiB name: Session_Pool 00:07:10.421 end mempools------- 00:07:10.421 6 memzones totaling size 4.142822 MiB 00:07:10.421 size: 1.000366 MiB name: RG_ring_0_75051 00:07:10.421 size: 1.000366 MiB name: RG_ring_1_75051 00:07:10.421 size: 1.000366 MiB name: RG_ring_4_75051 00:07:10.421 size: 1.000366 MiB name: RG_ring_5_75051 00:07:10.421 size: 
0.125366 MiB name: RG_ring_2_75051 00:07:10.421 size: 0.015991 MiB name: RG_ring_3_75051 00:07:10.421 end memzones------- 00:07:10.421 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:10.421 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:07:10.421 list of free elements. size: 12.472290 MiB 00:07:10.421 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:10.421 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:10.421 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:10.421 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:10.421 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:10.421 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:10.421 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:10.421 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:10.421 element at address: 0x200000200000 with size: 0.833191 MiB 00:07:10.421 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:07:10.421 element at address: 0x20000b200000 with size: 0.489624 MiB 00:07:10.421 element at address: 0x200000800000 with size: 0.486145 MiB 00:07:10.421 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:10.421 element at address: 0x200027e00000 with size: 0.395935 MiB 00:07:10.421 element at address: 0x200003a00000 with size: 0.347839 MiB 00:07:10.421 list of standard malloc elements. size: 199.265137 MiB 00:07:10.421 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:10.421 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:10.421 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:10.421 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:10.421 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:10.421 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:10.421 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:10.421 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:10.421 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:10.422 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:10.422 element 
at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087c740 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087c800 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59240 
with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27d640 with size: 0.000183 MiB 
00:07:10.422 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:10.422 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:10.423 element at 
address: 0x20001aa93640 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:10.423 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e65680 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c840 
with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ed00 with size: 0.000183 MiB 
00:07:10.423 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:10.423 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:10.423 list of memzone associated elements. size: 602.262573 MiB 00:07:10.423 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:10.423 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:10.423 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:10.423 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:10.423 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:10.423 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75051_0 00:07:10.423 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:10.424 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75051_0 00:07:10.424 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:10.424 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75051_0 00:07:10.424 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:10.424 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:10.424 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:10.424 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:10.424 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:10.424 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75051 00:07:10.424 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:10.424 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75051 00:07:10.424 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:10.424 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75051 00:07:10.424 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:10.424 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:10.424 element at address: 
0x2000194bc800 with size: 1.008118 MiB 00:07:10.424 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:10.424 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:10.424 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:10.424 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:10.424 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:10.424 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:10.424 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75051 00:07:10.424 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:10.424 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75051 00:07:10.424 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:10.424 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75051 00:07:10.424 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:10.424 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75051 00:07:10.424 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:10.424 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75051 00:07:10.424 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:10.424 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:10.424 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:10.424 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:10.424 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:10.424 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:10.424 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:10.424 associated memzone info: size: 0.125366 MiB name: RG_ring_2_75051 00:07:10.424 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:10.424 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:10.424 element at address: 0x200027e65740 with size: 0.023743 MiB 00:07:10.424 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:10.424 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:10.424 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75051 00:07:10.424 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:07:10.424 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:10.424 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:10.424 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75051 00:07:10.424 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:10.424 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75051 00:07:10.424 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:07:10.424 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:10.424 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:10.424 18:25:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75051 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75051 ']' 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75051 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.424 18:25:10 dpdk_mem_utility -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75051 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75051' 00:07:10.424 killing process with pid 75051 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75051 00:07:10.424 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75051 00:07:10.682 00:07:10.682 real 0m1.563s 00:07:10.682 user 0m1.560s 00:07:10.682 sys 0m0.444s 00:07:10.682 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.682 18:25:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:10.682 ************************************ 00:07:10.682 END TEST dpdk_mem_utility 00:07:10.682 ************************************ 00:07:10.682 18:25:10 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:10.682 18:25:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.682 18:25:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.683 18:25:10 -- common/autotest_common.sh@10 -- # set +x 00:07:10.683 ************************************ 00:07:10.683 START TEST event 00:07:10.683 ************************************ 00:07:10.683 18:25:10 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:10.943 * Looking for test storage... 00:07:10.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:10.943 18:25:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:10.943 18:25:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:10.943 18:25:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:10.943 18:25:10 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:10.943 18:25:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.943 18:25:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.943 ************************************ 00:07:10.943 START TEST event_perf 00:07:10.943 ************************************ 00:07:10.943 18:25:10 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:10.943 Running I/O for 1 seconds...[2024-07-23 18:25:10.877345] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:10.943 [2024-07-23 18:25:10.877542] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75123 ] 00:07:11.203 [2024-07-23 18:25:11.009138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.203 [2024-07-23 18:25:11.063512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.203 [2024-07-23 18:25:11.063710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.203 [2024-07-23 18:25:11.063741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.203 Running I/O for 1 seconds...[2024-07-23 18:25:11.063896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.143 00:07:12.143 lcore 0: 193321 00:07:12.143 lcore 1: 193321 00:07:12.143 lcore 2: 193321 00:07:12.143 lcore 3: 193321 00:07:12.143 done. 00:07:12.143 00:07:12.143 real 0m1.323s 00:07:12.143 user 0m4.104s 00:07:12.143 sys 0m0.099s 00:07:12.143 ************************************ 00:07:12.143 END TEST event_perf 00:07:12.143 ************************************ 00:07:12.143 18:25:12 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.143 18:25:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.402 18:25:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:12.402 18:25:12 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:12.402 18:25:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.402 18:25:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.402 ************************************ 00:07:12.402 START TEST event_reactor 00:07:12.402 ************************************ 00:07:12.402 18:25:12 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:12.402 [2024-07-23 18:25:12.261995] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:12.402 [2024-07-23 18:25:12.262141] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ] 00:07:12.402 [2024-07-23 18:25:12.397225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.402 [2024-07-23 18:25:12.440949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.783 test_start 00:07:13.783 oneshot 00:07:13.783 tick 100 00:07:13.783 tick 100 00:07:13.783 tick 250 00:07:13.783 tick 100 00:07:13.783 tick 100 00:07:13.783 tick 100 00:07:13.783 tick 250 00:07:13.783 tick 500 00:07:13.783 tick 100 00:07:13.783 tick 100 00:07:13.783 tick 250 00:07:13.783 tick 100 00:07:13.783 tick 100 00:07:13.783 test_end 00:07:13.783 00:07:13.783 real 0m1.312s 00:07:13.783 user 0m1.131s 00:07:13.783 sys 0m0.074s 00:07:13.783 18:25:13 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.783 18:25:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 ************************************ 00:07:13.783 END TEST event_reactor 00:07:13.783 ************************************ 00:07:13.783 18:25:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:13.783 18:25:13 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:13.783 18:25:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.783 18:25:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 ************************************ 00:07:13.783 START TEST event_reactor_perf 00:07:13.783 ************************************ 00:07:13.783 18:25:13 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:13.783 [2024-07-23 18:25:13.637939] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:13.783 [2024-07-23 18:25:13.638069] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75196 ] 00:07:13.783 [2024-07-23 18:25:13.780679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.783 [2024-07-23 18:25:13.830431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.165 test_start 00:07:15.165 test_end 00:07:15.165 Performance: 373483 events per second 00:07:15.165 00:07:15.165 real 0m1.323s 00:07:15.165 user 0m1.139s 00:07:15.165 sys 0m0.077s 00:07:15.165 ************************************ 00:07:15.165 END TEST event_reactor_perf 00:07:15.165 ************************************ 00:07:15.165 18:25:14 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.165 18:25:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.165 18:25:14 event -- event/event.sh@49 -- # uname -s 00:07:15.165 18:25:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:15.165 18:25:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:15.165 18:25:14 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.165 18:25:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.165 18:25:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.165 ************************************ 00:07:15.165 START TEST event_scheduler 00:07:15.165 ************************************ 00:07:15.165 18:25:14 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:15.165 * Looking for test storage... 00:07:15.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:15.165 18:25:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:15.165 18:25:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75265 00:07:15.165 18:25:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:15.165 18:25:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.165 18:25:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75265 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 75265 ']' 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.165 18:25:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:15.165 [2024-07-23 18:25:15.195199] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:15.165 [2024-07-23 18:25:15.195842] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75265 ] 00:07:15.425 [2024-07-23 18:25:15.344481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.425 [2024-07-23 18:25:15.399541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.425 [2024-07-23 18:25:15.399714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.425 [2024-07-23 18:25:15.399935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.425 [2024-07-23 18:25:15.400041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:16.008 18:25:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.008 POWER: Env isn't set yet! 00:07:16.008 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:16.008 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:16.008 POWER: Cannot set governor of lcore 0 to userspace 00:07:16.008 POWER: Attempting to initialise PSTAT power management... 00:07:16.008 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:16.008 POWER: Cannot set governor of lcore 0 to performance 00:07:16.008 POWER: Attempting to initialise CPPC power management... 00:07:16.008 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:16.008 POWER: Cannot set governor of lcore 0 to userspace 00:07:16.008 POWER: Attempting to initialise VM power management... 00:07:16.008 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:16.008 POWER: Unable to set Power Management Environment for lcore 0 00:07:16.008 [2024-07-23 18:25:16.012339] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:16.008 [2024-07-23 18:25:16.012358] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:16.008 [2024-07-23 18:25:16.012384] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:16.008 [2024-07-23 18:25:16.012410] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:16.008 [2024-07-23 18:25:16.012434] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:16.008 [2024-07-23 18:25:16.012442] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.008 18:25:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.008 18:25:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.267 [2024-07-23 18:25:16.086290] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
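For reference, the thread-manipulation steps exercised below are ordinary SPDK rpc.py calls made with the test's scheduler_plugin loaded. A minimal sketch of the same sequence, assuming the scheduler test app is listening on the default /var/tmp/spdk.sock, that scripts/rpc.py is called directly instead of the rpc_cmd wrapper, and that the plugin module from test/event/scheduler is importable via PYTHONPATH (those paths are assumptions, not taken from this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/event/scheduler   # assumed plugin location
  $rpc framework_set_scheduler dynamic        # select the dynamic scheduler before init
  $rpc framework_start_init                   # finish subsystem initialization
  # create a thread pinned to core 0 that reports itself 100% busy; the RPC prints the new thread id
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50    # lower its busy percentage to 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"           # remove the thread again

Here -n names the thread, -m is the cpumask it may run on, and -a its simulated active (busy) percentage, matching the values the test passes below.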
00:07:16.267 18:25:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.267 18:25:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:16.267 18:25:16 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.267 18:25:16 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.267 18:25:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.267 ************************************ 00:07:16.267 START TEST scheduler_create_thread 00:07:16.267 ************************************ 00:07:16.267 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 2 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 3 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 4 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 5 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 6 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 7 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 8 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 9 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 10 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.268 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.839 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.839 18:25:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:16.839 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.839 18:25:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.218 18:25:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.218 18:25:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:18.218 18:25:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:18.218 18:25:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.218 18:25:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.156 ************************************ 00:07:19.156 END TEST scheduler_create_thread 00:07:19.156 ************************************ 00:07:19.156 18:25:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.156 00:07:19.156 real 0m3.092s 00:07:19.156 user 0m0.014s 00:07:19.156 sys 0m0.010s 00:07:19.156 18:25:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.156 18:25:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.415 18:25:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:19.415 18:25:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75265 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 75265 ']' 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 75265 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75265 00:07:19.415 killing process with pid 75265 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75265' 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 75265 00:07:19.415 18:25:19 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 75265 00:07:19.674 [2024-07-23 18:25:19.571935] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
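The teardown that just ran is the killprocess helper from common/autotest_common.sh. A rough bash sketch of its visible behaviour, reconstructed only from the calls shown in this log (pid check, liveness probe, sudo guard, kill, wait) and simplified relative to the real helper:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                 # nothing to do without a pid
      kill -0 "$pid" 2>/dev/null || return 0    # process already gone
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          # the real helper special-cases processes running under sudo; skipped in this sketch
          [ "$name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                       # reap the child so no zombie is left behind
  }

In the scheduler test above it is invoked as killprocess $scheduler_pid once the EXIT trap has been cleared.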
00:07:19.934 ************************************ 00:07:19.934 END TEST event_scheduler 00:07:19.934 ************************************ 00:07:19.934 00:07:19.934 real 0m4.840s 00:07:19.934 user 0m8.981s 00:07:19.934 sys 0m0.443s 00:07:19.934 18:25:19 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.934 18:25:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.934 18:25:19 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:19.934 18:25:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:19.934 18:25:19 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:19.934 18:25:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.934 18:25:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.934 ************************************ 00:07:19.934 START TEST app_repeat 00:07:19.934 ************************************ 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75360 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75360' 00:07:19.934 Process app_repeat pid: 75360 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:19.934 spdk_app_start Round 0 00:07:19.934 18:25:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75360 /var/tmp/spdk-nbd.sock 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75360 ']' 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.934 18:25:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.934 [2024-07-23 18:25:19.963476] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:19.934 [2024-07-23 18:25:19.963717] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75360 ] 00:07:20.194 [2024-07-23 18:25:20.109722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.194 [2024-07-23 18:25:20.161905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.194 [2024-07-23 18:25:20.162037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.762 18:25:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:20.762 18:25:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:20.762 18:25:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.021 Malloc0 00:07:21.021 18:25:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.281 Malloc1 00:07:21.282 18:25:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.282 18:25:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.540 /dev/nbd0 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:21.540 18:25:21 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.540 1+0 records in 00:07:21.540 1+0 records out 00:07:21.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866719 s, 4.7 MB/s 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:21.540 18:25:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:21.540 /dev/nbd1 00:07:21.540 18:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:21.799 18:25:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.800 1+0 records in 00:07:21.800 1+0 records out 00:07:21.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266253 s, 15.4 MB/s 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:21.800 18:25:21 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.800 
18:25:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.800 { 00:07:21.800 "nbd_device": "/dev/nbd0", 00:07:21.800 "bdev_name": "Malloc0" 00:07:21.800 }, 00:07:21.800 { 00:07:21.800 "nbd_device": "/dev/nbd1", 00:07:21.800 "bdev_name": "Malloc1" 00:07:21.800 } 00:07:21.800 ]' 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.800 { 00:07:21.800 "nbd_device": "/dev/nbd0", 00:07:21.800 "bdev_name": "Malloc0" 00:07:21.800 }, 00:07:21.800 { 00:07:21.800 "nbd_device": "/dev/nbd1", 00:07:21.800 "bdev_name": "Malloc1" 00:07:21.800 } 00:07:21.800 ]' 00:07:21.800 18:25:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.059 /dev/nbd1' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.059 /dev/nbd1' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.059 256+0 records in 00:07:22.059 256+0 records out 00:07:22.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118398 s, 88.6 MB/s 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.059 256+0 records in 00:07:22.059 256+0 records out 00:07:22.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202319 s, 51.8 MB/s 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.059 256+0 records in 00:07:22.059 256+0 records out 00:07:22.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028171 s, 37.2 MB/s 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.059 18:25:21 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.059 18:25:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.319 18:25:22 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.319 18:25:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.578 18:25:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.578 18:25:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:22.838 18:25:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.098 [2024-07-23 18:25:22.966560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.098 [2024-07-23 18:25:23.008338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.098 [2024-07-23 18:25:23.008343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.098 [2024-07-23 18:25:23.051454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.098 [2024-07-23 18:25:23.051516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.391 spdk_app_start Round 1 00:07:26.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:26.391 18:25:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:26.391 18:25:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:26.391 18:25:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75360 /var/tmp/spdk-nbd.sock 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75360 ']' 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:26.391 18:25:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:26.391 18:25:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.391 Malloc0 00:07:26.391 18:25:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.391 Malloc1 00:07:26.391 18:25:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.391 18:25:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.650 /dev/nbd0 00:07:26.650 18:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.650 18:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.650 1+0 records in 00:07:26.650 1+0 records out 
00:07:26.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413383 s, 9.9 MB/s 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:26.650 18:25:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:26.650 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.650 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.650 18:25:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.911 /dev/nbd1 00:07:26.911 18:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.911 18:25:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.911 1+0 records in 00:07:26.911 1+0 records out 00:07:26.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385707 s, 10.6 MB/s 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:26.911 18:25:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.912 18:25:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:26.912 18:25:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:26.912 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.912 18:25:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.912 18:25:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.912 18:25:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.912 18:25:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.170 18:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.170 { 00:07:27.170 "nbd_device": "/dev/nbd0", 00:07:27.170 "bdev_name": "Malloc0" 00:07:27.170 }, 00:07:27.170 { 00:07:27.171 "nbd_device": "/dev/nbd1", 00:07:27.171 "bdev_name": "Malloc1" 00:07:27.171 } 
00:07:27.171 ]' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.171 { 00:07:27.171 "nbd_device": "/dev/nbd0", 00:07:27.171 "bdev_name": "Malloc0" 00:07:27.171 }, 00:07:27.171 { 00:07:27.171 "nbd_device": "/dev/nbd1", 00:07:27.171 "bdev_name": "Malloc1" 00:07:27.171 } 00:07:27.171 ]' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.171 /dev/nbd1' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.171 /dev/nbd1' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.171 256+0 records in 00:07:27.171 256+0 records out 00:07:27.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00914927 s, 115 MB/s 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.171 256+0 records in 00:07:27.171 256+0 records out 00:07:27.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262116 s, 40.0 MB/s 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.171 256+0 records in 00:07:27.171 256+0 records out 00:07:27.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233855 s, 44.8 MB/s 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.171 18:25:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.171 18:25:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.430 18:25:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.689 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.948 18:25:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.948 18:25:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.207 18:25:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:28.465 [2024-07-23 18:25:28.333060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.466 [2024-07-23 18:25:28.412214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.466 [2024-07-23 18:25:28.412236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.466 [2024-07-23 18:25:28.488879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:28.466 [2024-07-23 18:25:28.488965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.001 spdk_app_start Round 2 00:07:31.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.001 18:25:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.001 18:25:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:31.001 18:25:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75360 /var/tmp/spdk-nbd.sock 00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75360 ']' 00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
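The round that just finished follows a fixed pattern: create two malloc bdevs, export them as NBD devices, write the same random data to both, compare it back, and tear everything down. A condensed, illustrative version of that flow (temp-file path simplified, sizes taken from the trace):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096            # 64 MiB bdev with 4096-byte blocks -> Malloc0
  $RPC bdev_malloc_create 64 4096            # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0      # export each bdev as a kernel NBD device
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256   # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$dev"     # byte-for-byte verification of the first 1 MiB
  done
  rm /tmp/nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1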
00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.001 18:25:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.260 18:25:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.260 18:25:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:31.260 18:25:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.518 Malloc0 00:07:31.518 18:25:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.777 Malloc1 00:07:31.777 18:25:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.777 18:25:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.038 /dev/nbd0 00:07:32.038 18:25:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.038 18:25:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.038 1+0 records in 00:07:32.038 1+0 records out 
00:07:32.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537737 s, 7.6 MB/s 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:32.038 18:25:31 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:32.038 18:25:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.038 18:25:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.038 18:25:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.038 /dev/nbd1 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.298 1+0 records in 00:07:32.298 1+0 records out 00:07:32.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643115 s, 6.4 MB/s 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:32.298 18:25:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.298 { 00:07:32.298 "nbd_device": "/dev/nbd0", 00:07:32.298 "bdev_name": "Malloc0" 00:07:32.298 }, 00:07:32.298 { 00:07:32.298 "nbd_device": "/dev/nbd1", 00:07:32.298 "bdev_name": "Malloc1" 00:07:32.298 } 
00:07:32.298 ]' 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.298 { 00:07:32.298 "nbd_device": "/dev/nbd0", 00:07:32.298 "bdev_name": "Malloc0" 00:07:32.298 }, 00:07:32.298 { 00:07:32.298 "nbd_device": "/dev/nbd1", 00:07:32.298 "bdev_name": "Malloc1" 00:07:32.298 } 00:07:32.298 ]' 00:07:32.298 18:25:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.558 /dev/nbd1' 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.558 /dev/nbd1' 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.558 18:25:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.559 256+0 records in 00:07:32.559 256+0 records out 00:07:32.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005027 s, 209 MB/s 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.559 256+0 records in 00:07:32.559 256+0 records out 00:07:32.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254365 s, 41.2 MB/s 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.559 256+0 records in 00:07:32.559 256+0 records out 00:07:32.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023882 s, 43.9 MB/s 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.559 18:25:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.823 18:25:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.824 18:25:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.824 18:25:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.824 18:25:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.089 18:25:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.089 18:25:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.348 18:25:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.612 [2024-07-23 18:25:33.618856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.878 [2024-07-23 18:25:33.692685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.878 [2024-07-23 18:25:33.692692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.878 [2024-07-23 18:25:33.770071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.878 [2024-07-23 18:25:33.770140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.419 18:25:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75360 /var/tmp/spdk-nbd.sock 00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75360 ']' 00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
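With Round 2 torn down, the third and final iteration is starting. Reconstructed from the event.sh line numbers in the trace, the driving loop looks roughly like this (not a verbatim copy of the script):

  for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # block until the (re)started app answers RPC
    $RPC bdev_malloc_create 64 4096                       # Malloc0
    $RPC bdev_malloc_create 64 4096                       # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $RPC spdk_kill_instance SIGTERM                       # the repeat app treats SIGTERM as "start the next iteration"
    sleep 3
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock      # Round 3: the app comes up once more
  killprocess "$repeat_pid"                               # and is shut down for good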
00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.419 18:25:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:36.679 18:25:36 event.app_repeat -- event/event.sh@39 -- # killprocess 75360 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75360 ']' 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75360 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75360 00:07:36.679 killing process with pid 75360 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75360' 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75360 00:07:36.679 18:25:36 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75360 00:07:36.939 spdk_app_start is called in Round 0. 00:07:36.939 Shutdown signal received, stop current app iteration 00:07:36.939 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:07:36.939 spdk_app_start is called in Round 1. 00:07:36.939 Shutdown signal received, stop current app iteration 00:07:36.939 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:07:36.939 spdk_app_start is called in Round 2. 00:07:36.939 Shutdown signal received, stop current app iteration 00:07:36.939 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:07:36.939 spdk_app_start is called in Round 3. 00:07:36.939 Shutdown signal received, stop current app iteration 00:07:36.939 18:25:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:36.939 18:25:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:36.939 00:07:36.939 real 0m16.957s 00:07:36.939 user 0m36.746s 00:07:36.939 sys 0m2.527s 00:07:36.939 18:25:36 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.939 18:25:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.939 ************************************ 00:07:36.939 END TEST app_repeat 00:07:36.939 ************************************ 00:07:36.939 18:25:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:36.939 18:25:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:36.939 18:25:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.939 18:25:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.939 18:25:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.939 ************************************ 00:07:36.939 START TEST cpu_locks 00:07:36.939 ************************************ 00:07:36.939 18:25:36 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.199 * Looking for test storage... 
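The killprocess call above is the shared teardown helper from autotest_common.sh; what the trace shows it doing reduces to the following sketch (the real helper also special-cases targets running under sudo, which is omitted here):

  killprocess() {
    local pid=$1
    kill -0 "$pid"                                     # verify the process still exists
    process_name=$(ps --no-headers -o comm= "$pid")    # confirm what is about to be killed (reactor_0 here)
    echo "killing process with pid $pid"
    kill "$pid"                                        # SIGTERM
    wait "$pid" || true                                # reap it; ignore its exit status
  }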
00:07:37.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:37.199 18:25:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:37.199 18:25:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:37.199 18:25:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:37.199 18:25:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:37.199 18:25:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:37.199 18:25:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.199 18:25:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.199 ************************************ 00:07:37.199 START TEST default_locks 00:07:37.199 ************************************ 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75778 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75778 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75778 ']' 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.199 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.199 [2024-07-23 18:25:37.165612] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
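Each cpu_locks sub-test starts the same way as seen here: launch spdk_tgt pinned to a single core and block until its RPC socket accepts connections. In shorthand, using the default socket path:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # core mask 0x1: one reactor on core 0
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock           # poll until the UNIX-domain RPC socket is up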
00:07:37.199 [2024-07-23 18:25:37.165757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75778 ] 00:07:37.458 [2024-07-23 18:25:37.313221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.458 [2024-07-23 18:25:37.389143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.028 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.028 18:25:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:38.028 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75778 00:07:38.028 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75778 00:07:38.028 18:25:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75778 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75778 ']' 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75778 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75778 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.288 killing process with pid 75778 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75778' 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75778 00:07:38.288 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75778 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75778 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75778 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75778 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75778 ']' 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.247 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.247 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75778) - No such process 00:07:39.247 ERROR: process (pid: 75778) is no longer running 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.247 00:07:39.247 real 0m1.877s 00:07:39.247 user 0m1.668s 00:07:39.247 sys 0m0.706s 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.247 18:25:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.247 ************************************ 00:07:39.247 END TEST default_locks 00:07:39.247 ************************************ 00:07:39.247 18:25:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:39.247 18:25:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:39.247 18:25:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.247 18:25:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.247 ************************************ 00:07:39.247 START TEST default_locks_via_rpc 00:07:39.247 ************************************ 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75826 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75826 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75826 ']' 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
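The default_locks assertions above reduce to two checks: while the target is alive its pid must hold a file lock whose name contains spdk_cpu_lock (visible through lslocks), and once it is gone no lock files may be left behind. Illustrative helpers; the lock-file location in no_locks is an assumption, since the trace only shows the lslocks side:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock            # the running target must hold its per-core lock
  }

  no_locks() {
    local leftover
    leftover=$(ls /var/tmp/spdk_cpu_lock* 2>/dev/null || true)   # assumed lock-file path
    [ -z "$leftover" ]                                 # fail if anything survived the shutdown
  }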
00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.247 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.247 [2024-07-23 18:25:39.104227] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:39.247 [2024-07-23 18:25:39.104362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75826 ] 00:07:39.247 [2024-07-23 18:25:39.252518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.507 [2024-07-23 18:25:39.331340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75826 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75826 00:07:40.075 18:25:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75826 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75826 ']' 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75826 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 75826 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.334 killing process with pid 75826 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75826' 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75826 00:07:40.334 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75826 00:07:41.270 00:07:41.270 real 0m1.983s 00:07:41.270 user 0m1.809s 00:07:41.270 sys 0m0.721s 00:07:41.270 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.270 18:25:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.270 ************************************ 00:07:41.270 END TEST default_locks_via_rpc 00:07:41.270 ************************************ 00:07:41.270 18:25:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:41.270 18:25:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.270 18:25:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.270 18:25:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.270 ************************************ 00:07:41.270 START TEST non_locking_app_on_locked_coremask 00:07:41.270 ************************************ 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75878 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75878 /var/tmp/spdk.sock 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75878 ']' 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.270 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.270 [2024-07-23 18:25:41.154002] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
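default_locks_via_rpc, which just finished, drives the same core lock through the RPC interface instead of process start/stop. The relevant calls, roughly as traced (helper names as sketched earlier):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC framework_disable_cpumask_locks     # drop the per-core locks while the target keeps running
  no_locks                                 # nothing should be locked now
  $RPC framework_enable_cpumask_locks      # take them again
  locks_exist "$spdk_tgt_pid"              # lslocks must show spdk_cpu_lock once more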
00:07:41.270 [2024-07-23 18:25:41.154120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75878 ] 00:07:41.270 [2024-07-23 18:25:41.300421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.529 [2024-07-23 18:25:41.378922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75894 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75894 /var/tmp/spdk2.sock 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75894 ']' 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.097 18:25:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.097 [2024-07-23 18:25:42.031549] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:42.097 [2024-07-23 18:25:42.031719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75894 ] 00:07:42.356 [2024-07-23 18:25:42.171610] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
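The test now starting, non_locking_app_on_locked_coremask, checks that a second target can share core 0 as long as it opts out of the core lock and uses its own RPC socket. The launch sequence visible in the trace, condensed:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $BIN -m 0x1 &                                                  # first instance takes the core 0 lock
  pid1=$!
  waitforlisten "$pid1" /var/tmp/spdk.sock
  $BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, but no lock and a separate socket
  pid2=$!
  waitforlisten "$pid2" /var/tmp/spdk2.sock
  locks_exist "$pid1"                                            # only the first instance holds spdk_cpu_lock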
00:07:42.356 [2024-07-23 18:25:42.171664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.356 [2024-07-23 18:25:42.327035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.925 18:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.925 18:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:42.925 18:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75878 00:07:42.925 18:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75878 00:07:42.925 18:25:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75878 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75878 ']' 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75878 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75878 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75878' 00:07:43.497 killing process with pid 75878 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75878 00:07:43.497 18:25:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75878 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75894 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75894 ']' 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75894 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75894 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:44.871 killing process with pid 75894 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75894' 00:07:44.871 18:25:44 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75894 00:07:44.871 18:25:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75894 00:07:45.445 00:07:45.445 real 0m4.275s 00:07:45.445 user 0m4.167s 00:07:45.445 sys 0m1.285s 00:07:45.445 18:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.445 18:25:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.445 ************************************ 00:07:45.445 END TEST non_locking_app_on_locked_coremask 00:07:45.445 ************************************ 00:07:45.445 18:25:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:45.445 18:25:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.445 18:25:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.445 18:25:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.445 ************************************ 00:07:45.445 START TEST locking_app_on_unlocked_coremask 00:07:45.445 ************************************ 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75963 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75963 /var/tmp/spdk.sock 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75963 ']' 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.445 18:25:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.445 [2024-07-23 18:25:45.494748] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:45.445 [2024-07-23 18:25:45.494871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75963 ] 00:07:45.704 [2024-07-23 18:25:45.641791] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.704 [2024-07-23 18:25:45.641871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.704 [2024-07-23 18:25:45.720156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75979 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75979 /var/tmp/spdk2.sock 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75979 ']' 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.273 18:25:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:46.533 [2024-07-23 18:25:46.389727] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:46.533 [2024-07-23 18:25:46.389859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75979 ] 00:07:46.533 [2024-07-23 18:25:46.531952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.792 [2024-07-23 18:25:46.681075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.360 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.360 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:47.361 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75979 00:07:47.361 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75979 00:07:47.361 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75963 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75963 ']' 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75963 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75963 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.620 killing process with pid 75963 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75963' 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75963 00:07:47.620 18:25:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75963 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75979 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75979 ']' 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75979 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75979 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:07:49.002 killing process with pid 75979 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75979' 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75979 00:07:49.002 18:25:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75979 00:07:49.571 00:07:49.571 real 0m4.121s 00:07:49.571 user 0m3.977s 00:07:49.571 sys 0m1.236s 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.571 ************************************ 00:07:49.571 END TEST locking_app_on_unlocked_coremask 00:07:49.571 ************************************ 00:07:49.571 18:25:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:49.571 18:25:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.571 18:25:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.571 18:25:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.571 ************************************ 00:07:49.571 START TEST locking_app_on_locked_coremask 00:07:49.571 ************************************ 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76048 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76048 /var/tmp/spdk.sock 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76048 ']' 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:49.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:49.571 18:25:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.830 [2024-07-23 18:25:49.676943] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:49.830 [2024-07-23 18:25:49.677052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76048 ] 00:07:49.830 [2024-07-23 18:25:49.824732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.090 [2024-07-23 18:25:49.901449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76064 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76064 /var/tmp/spdk2.sock 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76064 /var/tmp/spdk2.sock 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76064 /var/tmp/spdk2.sock 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76064 ']' 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.659 18:25:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.659 [2024-07-23 18:25:50.593363] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
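The second instance started just above (pid 76064) runs on the same -m 0x1 mask as pid 76048 but without --disable-cpumask-locks, so it is expected to abort as soon as it tries to claim core 0; its EAL output and the "Cannot create lock on core 0" error follow. A by-hand reproduction of that failure mode, sketched from the commands in the log (the lock-file name for core 0 is presumed to be /var/tmp/spdk_cpu_lock_000, and the real harness waits with waitforlisten rather than a fixed sleep):

    # Sketch only: two targets contending for the same single-core mask
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                          # first target, presumably claims /var/tmp/spdk_cpu_lock_000
    sleep 2                                       # stand-in for waitforlisten
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock     # second target, expected to exit with the lock error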
00:07:50.660 [2024-07-23 18:25:50.593617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76064 ] 00:07:50.918 [2024-07-23 18:25:50.742546] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76048 has claimed it. 00:07:50.918 [2024-07-23 18:25:50.742613] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:51.177 ERROR: process (pid: 76064) is no longer running 00:07:51.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76064) - No such process 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76048 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76048 00:07:51.177 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76048 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76048 ']' 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76048 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76048 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:51.748 killing process with pid 76048 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76048' 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76048 00:07:51.748 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76048 00:07:52.007 00:07:52.007 real 0m2.396s 00:07:52.007 user 0m2.460s 00:07:52.007 sys 0m0.819s 00:07:52.007 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.007 18:25:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.007 ************************************ 00:07:52.007 END TEST locking_app_on_locked_coremask 00:07:52.007 ************************************ 00:07:52.007 18:25:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:52.007 18:25:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.007 18:25:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.007 18:25:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.007 ************************************ 00:07:52.007 START TEST locking_overlapped_coremask 00:07:52.007 ************************************ 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76117 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76117 /var/tmp/spdk.sock 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76117 ']' 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.007 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 [2024-07-23 18:25:52.154181] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:52.265 [2024-07-23 18:25:52.154330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76117 ] 00:07:52.265 [2024-07-23 18:25:52.304025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.524 [2024-07-23 18:25:52.384086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.524 [2024-07-23 18:25:52.384197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.524 [2024-07-23 18:25:52.384954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76129 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76129 /var/tmp/spdk2.sock 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76129 /var/tmp/spdk2.sock 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76129 /var/tmp/spdk2.sock 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76129 ']' 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.092 18:25:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.092 [2024-07-23 18:25:52.991784] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:07:53.092 [2024-07-23 18:25:52.991894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76129 ] 00:07:53.092 [2024-07-23 18:25:53.133734] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76117 has claimed it. 00:07:53.092 [2024-07-23 18:25:53.134103] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:53.661 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76129) - No such process 00:07:53.661 ERROR: process (pid: 76129) is no longer running 00:07:53.661 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76117 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76117 ']' 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76117 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76117 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76117' 00:07:53.662 killing process with pid 76117 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76117 00:07:53.662 18:25:53 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 76117 00:07:54.231 00:07:54.231 real 0m2.233s 00:07:54.231 user 0m5.604s 00:07:54.231 sys 0m0.610s 00:07:54.231 18:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.231 18:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.231 ************************************ 00:07:54.231 END TEST locking_overlapped_coremask 00:07:54.231 ************************************ 00:07:54.491 18:25:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:54.491 18:25:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.491 18:25:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.491 18:25:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 START TEST locking_overlapped_coremask_via_rpc 00:07:54.491 ************************************ 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76177 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76177 /var/tmp/spdk.sock 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76177 ']' 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.491 18:25:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 [2024-07-23 18:25:54.439904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:54.491 [2024-07-23 18:25:54.440032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76177 ] 00:07:54.751 [2024-07-23 18:25:54.588813] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
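Both overlapped-coremask cases pair a primary target started with -m 0x7 against a secondary started with -m 0x1c (the secondary for this via_rpc variant is launched just below). The masks decode to cores 0-2 and cores 2-4 respectively, so the only contended core is core 2, which matches the "Cannot create lock on core 2" errors in this run. A quick way to see the overlap, using the mask values from the log:

    # Intersection of the two core masks used by these tests
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2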
00:07:54.751 [2024-07-23 18:25:54.588887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.751 [2024-07-23 18:25:54.682857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.751 [2024-07-23 18:25:54.682984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.751 [2024-07-23 18:25:54.682873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76195 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76195 /var/tmp/spdk2.sock 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76195 ']' 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.320 18:25:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.320 [2024-07-23 18:25:55.307242] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:55.320 [2024-07-23 18:25:55.307380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76195 ] 00:07:55.580 [2024-07-23 18:25:55.446205] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:55.580 [2024-07-23 18:25:55.449594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.580 [2024-07-23 18:25:55.544891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.580 [2024-07-23 18:25:55.544808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.580 [2024-07-23 18:25:55.544997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.148 [2024-07-23 18:25:56.116774] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76177 has claimed it. 
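Because both via_rpc targets start with --disable-cpumask-locks, neither claims its cores at launch; the test then turns the locks on at runtime with the framework_enable_cpumask_locks RPC, first against the primary (which takes cores 0-2) and then against the secondary over /var/tmp/spdk2.sock, whose attempt to claim core 2 fails with the error above and is reported back to the client as the JSON-RPC error shown next. The harness's rpc_cmd is a wrapper, so a direct equivalent call would presumably be:

    # Assumed direct form of "rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks";
    # the rpc.py path is an assumption based on the repo layout seen elsewhere in this log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks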
00:07:56.148 request: 00:07:56.148 { 00:07:56.148 "method": "framework_enable_cpumask_locks", 00:07:56.148 "req_id": 1 00:07:56.148 } 00:07:56.148 Got JSON-RPC error response 00:07:56.148 response: 00:07:56.148 { 00:07:56.148 "code": -32603, 00:07:56.148 "message": "Failed to claim CPU core: 2" 00:07:56.148 } 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76177 /var/tmp/spdk.sock 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76177 ']' 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:56.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:56.148 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76195 /var/tmp/spdk2.sock 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76195 ']' 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:56.408 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:56.668 00:07:56.668 real 0m2.185s 00:07:56.668 user 0m0.968s 00:07:56.668 sys 0m0.150s 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.668 18:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.668 ************************************ 00:07:56.668 END TEST locking_overlapped_coremask_via_rpc 00:07:56.668 ************************************ 00:07:56.668 18:25:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:56.668 18:25:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76177 ]] 00:07:56.668 18:25:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76177 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76177 ']' 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76177 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76177 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76177' 00:07:56.668 killing process with pid 76177 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76177 00:07:56.668 18:25:56 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76177 00:07:57.237 18:25:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76195 ]] 00:07:57.237 18:25:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76195 00:07:57.237 18:25:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76195 ']' 00:07:57.237 18:25:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76195 00:07:57.237 18:25:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:57.237 18:25:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.237 
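check_remaining_locks, expanded in the xtrace above (cpu_locks.sh@36-38), asserts that after the RPC exchange the per-core lock files present are exactly /var/tmp/spdk_cpu_lock_000 through _002, i.e. the three cores the 0x7 primary ended up claiming. The same comparison, pulled out of the harness as a stand-alone sketch with the paths taken verbatim from the trace:

    # Stand-alone version of the check_remaining_locks comparison traced above
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected core locks remain"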
18:25:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76195 00:07:57.496 18:25:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:57.496 18:25:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:57.496 18:25:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76195' 00:07:57.496 killing process with pid 76195 00:07:57.496 18:25:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76195 00:07:57.496 18:25:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76195 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76177 ]] 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76177 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76177 ']' 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76177 00:07:57.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76177) - No such process 00:07:57.755 Process with pid 76177 is not found 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76177 is not found' 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76195 ]] 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76195 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76195 ']' 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76195 00:07:57.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76195) - No such process 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76195 is not found' 00:07:57.755 Process with pid 76195 is not found 00:07:57.755 18:25:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:57.755 00:07:57.755 real 0m20.765s 00:07:57.755 user 0m32.327s 00:07:57.755 sys 0m6.781s 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.755 18:25:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 END TEST cpu_locks 00:07:57.755 ************************************ 00:07:57.755 00:07:57.755 real 0m47.044s 00:07:57.755 user 1m24.607s 00:07:57.755 sys 0m10.352s 00:07:57.755 18:25:57 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.755 18:25:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 END TEST event 00:07:57.755 ************************************ 00:07:57.755 18:25:57 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:57.755 18:25:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:57.755 18:25:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.755 18:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 START TEST thread 00:07:57.755 ************************************ 00:07:57.755 18:25:57 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:58.015 * Looking for test storage... 
00:07:58.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:58.015 18:25:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:58.015 18:25:57 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:58.015 18:25:57 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.015 18:25:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.015 ************************************ 00:07:58.015 START TEST thread_poller_perf 00:07:58.015 ************************************ 00:07:58.015 18:25:57 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:58.015 [2024-07-23 18:25:57.978410] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:58.015 [2024-07-23 18:25:57.978568] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76314 ] 00:07:58.274 [2024-07-23 18:25:58.124106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.274 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:58.274 [2024-07-23 18:25:58.198273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.654 ====================================== 00:07:59.654 busy:2298094328 (cyc) 00:07:59.654 total_run_count: 379000 00:07:59.654 tsc_hz: 2290000000 (cyc) 00:07:59.654 ====================================== 00:07:59.654 poller_cost: 6063 (cyc), 2647 (nsec) 00:07:59.654 00:07:59.654 real 0m1.401s 00:07:59.654 user 0m1.193s 00:07:59.654 sys 0m0.101s 00:07:59.654 18:25:59 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.654 18:25:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.654 ************************************ 00:07:59.654 END TEST thread_poller_perf 00:07:59.654 ************************************ 00:07:59.654 18:25:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.654 18:25:59 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:59.654 18:25:59 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.654 18:25:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.654 ************************************ 00:07:59.654 START TEST thread_poller_perf 00:07:59.654 ************************************ 00:07:59.654 18:25:59 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:59.654 [2024-07-23 18:25:59.436499] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:59.654 [2024-07-23 18:25:59.436648] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76356 ] 00:07:59.655 [2024-07-23 18:25:59.582084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.655 Running 1000 pollers for 1 seconds with 0 microseconds period. 
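The poller_perf summary lines are internally consistent: poller_cost in cycles appears to be busy cycles divided by total_run_count, and the nanosecond figure is that divided by tsc_hz. For the 1-microsecond-period run above, 2298094328 / 379000 ≈ 6063 cycles and 6063 / 2.29 GHz ≈ 2647 ns, matching the reported 6063 (cyc), 2647 (nsec); the 0-microsecond run whose results follow works out the same way (2293549262 / 5135000 ≈ 446 cycles ≈ 194 ns). Re-deriving it with shell integer arithmetic:

    # Re-deriving poller_cost from the counters in the report above (values copied from the log)
    echo $(( 2298094328 / 379000 ))                # 6063 cycles per poller invocation
    echo $(( 6063 * 1000000000 / 2290000000 ))     # 2647 ns at the 2.29 GHz TSC reported above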
00:07:59.655 [2024-07-23 18:25:59.654698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.034 ====================================== 00:08:01.034 busy:2293549262 (cyc) 00:08:01.034 total_run_count: 5135000 00:08:01.034 tsc_hz: 2290000000 (cyc) 00:08:01.034 ====================================== 00:08:01.034 poller_cost: 446 (cyc), 194 (nsec) 00:08:01.034 00:08:01.034 real 0m1.386s 00:08:01.034 user 0m1.183s 00:08:01.034 sys 0m0.097s 00:08:01.034 18:26:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.034 18:26:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.034 ************************************ 00:08:01.034 END TEST thread_poller_perf 00:08:01.034 ************************************ 00:08:01.034 18:26:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:01.034 00:08:01.034 real 0m3.034s 00:08:01.034 user 0m2.462s 00:08:01.034 sys 0m0.365s 00:08:01.034 18:26:00 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.034 18:26:00 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.034 ************************************ 00:08:01.034 END TEST thread 00:08:01.034 ************************************ 00:08:01.034 18:26:00 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:01.034 18:26:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.034 18:26:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.034 18:26:00 -- common/autotest_common.sh@10 -- # set +x 00:08:01.034 ************************************ 00:08:01.034 START TEST accel 00:08:01.034 ************************************ 00:08:01.034 18:26:00 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:01.034 * Looking for test storage... 00:08:01.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:01.034 18:26:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:01.034 18:26:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:01.034 18:26:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:01.034 18:26:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76426 00:08:01.034 18:26:01 accel -- accel/accel.sh@63 -- # waitforlisten 76426 00:08:01.034 18:26:01 accel -- common/autotest_common.sh@827 -- # '[' -z 76426 ']' 00:08:01.034 18:26:01 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.034 18:26:01 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.034 18:26:01 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:01.034 18:26:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:01.034 18:26:01 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.034 18:26:01 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.034 18:26:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.034 18:26:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.034 18:26:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.034 18:26:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.034 18:26:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.034 18:26:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.034 18:26:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:01.034 18:26:01 accel -- accel/accel.sh@41 -- # jq -r . 00:08:01.292 [2024-07-23 18:26:01.107988] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:01.292 [2024-07-23 18:26:01.108118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76426 ] 00:08:01.292 [2024-07-23 18:26:01.253198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.292 [2024-07-23 18:26:01.328943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.858 18:26:01 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.858 18:26:01 accel -- common/autotest_common.sh@860 -- # return 0 00:08:01.858 18:26:01 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:01.858 18:26:01 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:01.858 18:26:01 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:01.858 18:26:01 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:01.858 18:26:01 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:01.858 18:26:01 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:01.858 18:26:01 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:01.858 18:26:01 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.858 18:26:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.858 18:26:01 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.118 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.118 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.118 
18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.119 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.119 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.119 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.119 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.119 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.119 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.119 18:26:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # IFS== 00:08:02.119 18:26:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:02.119 18:26:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:02.119 18:26:01 accel -- accel/accel.sh@75 -- # killprocess 76426 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@946 -- # '[' -z 76426 ']' 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@950 -- # kill -0 76426 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@951 -- # uname 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76426 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.119 killing process with pid 76426 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76426' 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@965 -- # kill 76426 00:08:02.119 18:26:01 accel -- common/autotest_common.sh@970 -- # wait 76426 00:08:02.687 18:26:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:02.687 18:26:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.687 18:26:02 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:02.687 18:26:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
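get_expected_opcs (accel.sh@60-73, traced above) asks the target for its opcode-to-module table over RPC, flattens the JSON into key=value pairs with jq, and reads each pair into the expected_opcs map; with no hardware accel module configured in this run, every opcode above resolves to software. Outside the rpc_cmd wrapper, the same query would presumably look like:

    # Assumed direct form of the accel_get_opc_assignments query traced above;
    # the rpc.py path is an assumption, the jq filter is copied from the xtrace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'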
00:08:02.687 18:26:02 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.687 18:26:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:02.687 18:26:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.687 18:26:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.687 ************************************ 00:08:02.687 START TEST accel_missing_filename 00:08:02.687 ************************************ 00:08:02.687 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:08:02.687 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:02.687 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:02.688 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:02.688 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.688 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:02.688 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.688 18:26:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:02.688 18:26:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:02.946 [2024-07-23 18:26:02.777063] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:02.946 [2024-07-23 18:26:02.777199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76485 ] 00:08:02.946 [2024-07-23 18:26:02.920921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.946 [2024-07-23 18:26:02.991169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.257 [2024-07-23 18:26:03.069355] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.257 [2024-07-23 18:26:03.190549] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:03.527 A filename is required. 
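The accel_missing_filename test drives accel_perf with "-w compress" but no "-l" input file, so the application aborts with "A filename is required." and a non-zero status; the surrounding NOT wrapper from autotest_common.sh then has to turn that expected failure into a test pass. A simplified reconstruction of that wrapper, inferred from the es= handling visible in the trace that follows rather than copied from the source:

    NOT() {
        local es=0
        "$@" || es=$?                # the wrapped accel_perf exits 234 in this run
        if (( es > 128 )); then
            es=$(( es - 128 ))       # strip the signal bias (234 -> 106)
        fi
        case "$es" in
            0) ;;                    # command unexpectedly succeeded
            *) es=1 ;;               # any genuine failure collapses to 1
        esac
        (( !es == 0 ))               # NOT succeeds only when the command failed
    }

Used as "NOT accel_perf -t 1 -w compress", it returns success precisely because the missing filename made accel_perf fail.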
00:08:03.527 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:03.527 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:03.528 00:08:03.528 real 0m0.598s 00:08:03.528 user 0m0.352s 00:08:03.528 sys 0m0.187s 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.528 18:26:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:03.528 ************************************ 00:08:03.528 END TEST accel_missing_filename 00:08:03.528 ************************************ 00:08:03.528 18:26:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:03.528 18:26:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:03.528 18:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.528 18:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.528 ************************************ 00:08:03.528 START TEST accel_compress_verify 00:08:03.528 ************************************ 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:03.528 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.528 18:26:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:03.528 18:26:03 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:08:03.528 [2024-07-23 18:26:03.424099] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:03.528 [2024-07-23 18:26:03.424252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76511 ] 00:08:03.528 [2024-07-23 18:26:03.569953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.837 [2024-07-23 18:26:03.642331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.837 [2024-07-23 18:26:03.720170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.837 [2024-07-23 18:26:03.842445] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:04.114 00:08:04.114 Compression does not support the verify option, aborting. 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.114 00:08:04.114 real 0m0.603s 00:08:04.114 user 0m0.364s 00:08:04.114 sys 0m0.181s 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.114 18:26:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:04.114 ************************************ 00:08:04.114 END TEST accel_compress_verify 00:08:04.114 ************************************ 00:08:04.114 18:26:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:04.114 18:26:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:04.114 18:26:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.114 18:26:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.114 ************************************ 00:08:04.114 START TEST accel_wrong_workload 00:08:04.114 ************************************ 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:04.114 18:26:04 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:04.114 18:26:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:04.114 Unsupported workload type: foobar 00:08:04.114 [2024-07-23 18:26:04.078451] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:04.114 accel_perf options: 00:08:04.114 [-h help message] 00:08:04.114 [-q queue depth per core] 00:08:04.114 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.114 [-T number of threads per core 00:08:04.114 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.114 [-t time in seconds] 00:08:04.114 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.114 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:04.114 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.114 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.114 [-S for crc32c workload, use this seed value (default 0) 00:08:04.114 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.114 [-f for fill workload, use this BYTE value (default 255) 00:08:04.114 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.114 [-y verify result if this switch is on] 00:08:04.114 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.114 Can be used to spread operations across a wider range of memory. 
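The option list printed above is accel_perf's own usage text, emitted because "-w foobar" is not a recognized workload. For orientation, these are the shapes of invocations this same run exercises (arguments taken from the run_test lines in this log, with the harness-supplied "-c /dev/fd/62" config argument omitted):

    accel_perf -t 1 -w crc32c -S 32 -y              # CRC-32C for 1 second, seed 32, verify results
    accel_perf -t 1 -w copy -y                      # plain copy with verification
    accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill byte 128, queue depth 64, 64 tasks per core
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y   # rejected: compress does not support -y

Anything outside the advertised workload set, such as "-w foobar" here, makes spdk_app_parse_args fail and the tool exits before the reactor ever starts.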
00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:04.114 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.115 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.115 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.115 00:08:04.115 real 0m0.068s 00:08:04.115 user 0m0.070s 00:08:04.115 sys 0m0.041s 00:08:04.115 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.115 18:26:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:04.115 ************************************ 00:08:04.115 END TEST accel_wrong_workload 00:08:04.115 ************************************ 00:08:04.115 18:26:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.115 18:26:04 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:04.115 18:26:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.115 18:26:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.115 ************************************ 00:08:04.115 START TEST accel_negative_buffers 00:08:04.115 ************************************ 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.115 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:04.115 18:26:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:04.443 -x option must be non-negative. 
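Each of these negative tests goes through the same build_accel_config preamble traced above (accel.sh lines 31-41): an array of optional JSON fragments is assembled, joined with commas via a local IFS, and pretty-printed with jq so accel_perf can consume it through "-c /dev/fd/62", presumably supplied by process substitution. A hedged sketch of that flow; the wrapper key and the EXTRA_ACCEL_JSON variable are illustrative assumptions, not taken from accel.sh:

    build_accel_config() {
        accel_json_cfg=()                                                     # per-test JSON fragments (empty in this run)
        [[ -n $EXTRA_ACCEL_JSON ]] && accel_json_cfg+=("$EXTRA_ACCEL_JSON")   # hypothetical extra module config
        local IFS=,
        jq -r . <<< "{\"accel_cfg\": [${accel_json_cfg[*]}]}"                 # key name is illustrative only
    }

    # The harness then runs something equivalent to:
    accel_perf -c <(build_accel_config) -t 1 -w xor -y -x -1    # fails: -x must be non-negative

Because no crypto or driver options are set in this run, every "[[ 0 -gt 0 ]]" guard above is false and the generated config is effectively empty.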
00:08:04.443 [2024-07-23 18:26:04.192320] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:04.443 accel_perf options: 00:08:04.443 [-h help message] 00:08:04.443 [-q queue depth per core] 00:08:04.443 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.443 [-T number of threads per core 00:08:04.443 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.443 [-t time in seconds] 00:08:04.443 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.443 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:04.443 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.443 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.443 [-S for crc32c workload, use this seed value (default 0) 00:08:04.443 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.443 [-f for fill workload, use this BYTE value (default 255) 00:08:04.443 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.443 [-y verify result if this switch is on] 00:08:04.443 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.443 Can be used to spread operations across a wider range of memory. 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.443 00:08:04.443 real 0m0.059s 00:08:04.443 user 0m0.061s 00:08:04.443 sys 0m0.041s 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.443 18:26:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:04.443 ************************************ 00:08:04.443 END TEST accel_negative_buffers 00:08:04.443 ************************************ 00:08:04.443 18:26:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:04.443 18:26:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:04.443 18:26:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.443 18:26:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.443 ************************************ 00:08:04.443 START TEST accel_crc32c 00:08:04.443 ************************************ 00:08:04.443 18:26:04 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:04.443 18:26:04 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:04.443 18:26:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:04.443 [2024-07-23 18:26:04.285948] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:04.443 [2024-07-23 18:26:04.286112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76578 ] 00:08:04.443 [2024-07-23 18:26:04.433673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.728 [2024-07-23 18:26:04.509273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:04.728 18:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:06.184 18:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.184 00:08:06.184 real 0m1.606s 00:08:06.184 user 0m0.019s 00:08:06.184 sys 0m0.000s 00:08:06.184 18:26:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.184 18:26:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:06.184 ************************************ 00:08:06.184 END TEST accel_crc32c 00:08:06.184 ************************************ 00:08:06.184 18:26:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:06.184 18:26:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:06.184 18:26:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.184 18:26:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.185 ************************************ 00:08:06.185 START TEST accel_crc32c_C2 00:08:06.185 ************************************ 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.185 18:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:06.185 [2024-07-23 18:26:05.956186] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:06.185 [2024-07-23 18:26:05.956351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76613 ] 00:08:06.185 [2024-07-23 18:26:06.091695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.185 [2024-07-23 18:26:06.163870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.444 18:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.825 00:08:07.825 real 0m1.596s 00:08:07.825 user 0m1.316s 00:08:07.825 sys 0m0.197s 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.825 18:26:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:07.825 ************************************ 00:08:07.825 END TEST accel_crc32c_C2 00:08:07.825 ************************************ 00:08:07.825 18:26:07 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:07.825 18:26:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:07.825 18:26:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.825 18:26:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.825 ************************************ 00:08:07.825 START TEST accel_copy 00:08:07.825 ************************************ 00:08:07.825 18:26:07 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:07.825 18:26:07 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:07.825 [2024-07-23 18:26:07.613368] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:07.825 [2024-07-23 18:26:07.613527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76649 ] 00:08:07.825 [2024-07-23 18:26:07.760503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.825 [2024-07-23 18:26:07.835008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.085 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:08.086 18:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:09.468 18:26:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.468 00:08:09.468 real 0m1.611s 00:08:09.468 user 0m0.020s 00:08:09.468 sys 0m0.004s 00:08:09.468 18:26:09 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.468 ************************************ 00:08:09.468 END TEST accel_copy 00:08:09.468 ************************************ 00:08:09.468 18:26:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.468 18:26:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.468 18:26:09 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:09.468 18:26:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.468 18:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.468 ************************************ 00:08:09.468 START TEST accel_fill 00:08:09.468 ************************************ 00:08:09.468 18:26:09 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:09.468 18:26:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
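The repeated "IFS=:", "read -r var val", and "case \"$var\" in" lines that dominate each of these tests are accel_test (accel.sh lines ~15-27) parsing the configuration banner accel_perf prints, capturing which opcode ran and which module executed it, then checking the result against the expected_opcs map built earlier. A rough reconstruction, with the banner field names inferred from the values seen in the trace ("fill", "4096 bytes", "software", "1 seconds", "Yes") rather than copied from accel.sh:

    accel_test() {
        local accel_opc
        local accel_module
        while IFS=: read -r var val; do
            val=${val//[[:space:]]/}               # trim the banner's padding
            case "$var" in
                *[Ww]orkload*) accel_opc=$val ;;   # e.g. "Workload Type: fill"
                *[Mm]odule*) accel_module=$val ;;  # e.g. "Module: software"
            esac
        done < <(accel_perf "$@")
        [[ -n $accel_module ]] && [[ -n $accel_opc ]]
        [[ $accel_module == "${expected_opcs[$accel_opc]}" ]]   # "software" for every opcode in this run
    }

    run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y   # the fill invocation used in this log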
00:08:09.468 [2024-07-23 18:26:09.290202] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:09.468 [2024-07-23 18:26:09.290469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76684 ] 00:08:09.468 [2024-07-23 18:26:09.437627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.468 [2024-07-23 18:26:09.513662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:09.728 18:26:09 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:09.728 18:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:11.110 ************************************ 00:08:11.110 END TEST accel_fill 00:08:11.110 ************************************ 00:08:11.110 18:26:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.110 00:08:11.110 real 0m1.618s 00:08:11.110 user 0m1.337s 00:08:11.110 sys 0m0.195s 00:08:11.110 18:26:10 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.110 18:26:10 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:11.110 18:26:10 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:11.110 18:26:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:11.110 18:26:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.110 18:26:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.110 ************************************ 00:08:11.110 START TEST accel_copy_crc32c 00:08:11.110 ************************************ 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:11.110 18:26:10 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:11.110 [2024-07-23 18:26:10.964932] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
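The fill case above finishes in roughly 1.6 s of wall-clock time and the suite moves straight on to copy_crc32c. Each of these cases drives the accel_perf example binary with the flags echoed in the xtrace: -t 1 presumably a one-second run, -w the workload to exercise, and -y result verification. A minimal manual reproduction, assuming the same repo layout and that the JSON accel config the harness feeds through -c /dev/fd/62 can simply be omitted to fall back to the default software module, might look like:

  #!/usr/bin/env bash
  # Sketch only: re-run the one-second, verified software-path cases seen above.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf  # path taken from the log
  "$ACCEL_PERF" -t 1 -w fill -y          # fill workload, verified
  "$ACCEL_PERF" -t 1 -w copy_crc32c -y   # copy + CRC32C workload, verified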
00:08:11.110 [2024-07-23 18:26:10.965072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76724 ] 00:08:11.110 [2024-07-23 18:26:11.110914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.370 [2024-07-23 18:26:11.188334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.370 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.371 18:26:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.751 ************************************ 00:08:12.751 END TEST accel_copy_crc32c 00:08:12.751 ************************************ 00:08:12.751 00:08:12.751 real 0m1.614s 00:08:12.751 user 0m1.347s 00:08:12.751 sys 0m0.184s 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.751 18:26:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:12.751 18:26:12 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:12.751 18:26:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:12.751 18:26:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.751 18:26:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.751 ************************************ 00:08:12.751 START TEST accel_copy_crc32c_C2 00:08:12.751 ************************************ 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:12.751 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:12.751 [2024-07-23 18:26:12.647158] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:12.751 [2024-07-23 18:26:12.647322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76761 ] 00:08:12.751 [2024-07-23 18:26:12.796405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.010 [2024-07-23 18:26:12.875118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.010 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
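The copy_crc32c_C2 variant being configured here is the same copy+CRC32C workload run with -C 2, and the echoed values pair a 4096-byte buffer with an 8192-byte one. A hedged sketch of that single invocation, under the same assumptions as the earlier one (default software module, generated -c config omitted, -C presumed to control the number of chained source vectors):

  # Sketch: the chained copy_crc32c case with an io vector count of 2.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2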
00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.011 18:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.391 00:08:14.391 real 0m1.621s 00:08:14.391 user 0m1.337s 00:08:14.391 sys 0m0.201s 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.391 18:26:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:14.391 ************************************ 00:08:14.391 END TEST accel_copy_crc32c_C2 00:08:14.391 ************************************ 00:08:14.391 18:26:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:08:14.391 18:26:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:14.391 18:26:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.391 18:26:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:14.391 ************************************ 00:08:14.391 START TEST accel_dualcast 00:08:14.391 ************************************ 00:08:14.391 18:26:14 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:14.391 18:26:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:14.392 18:26:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:14.392 [2024-07-23 18:26:14.327412] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
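The dualcast case started here follows the same pattern as fill and copy_crc32c, so the remaining software-path workloads in this block can be swept in a single loop instead of one invocation at a time; a sketch, assuming the binary path and flag set recorded in the log:

  # Sketch: sweep the remaining verified one-second workloads in this block.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  for wl in dualcast compare xor; do
      echo "== $wl =="
      "$ACCEL_PERF" -t 1 -w "$wl" -y
  done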
00:08:14.392 [2024-07-23 18:26:14.327653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76796 ] 00:08:14.652 [2024-07-23 18:26:14.474822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.652 [2024-07-23 18:26:14.550836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:14.652 18:26:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 
18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:16.030 18:26:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.030 00:08:16.030 real 0m1.613s 00:08:16.030 user 0m0.023s 00:08:16.030 sys 0m0.004s 00:08:16.030 18:26:15 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.030 18:26:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:16.030 ************************************ 00:08:16.030 END TEST accel_dualcast 00:08:16.030 ************************************ 00:08:16.030 18:26:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:16.030 18:26:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:16.030 18:26:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.030 18:26:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.031 ************************************ 00:08:16.031 START TEST accel_compare 00:08:16.031 ************************************ 00:08:16.031 18:26:15 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:16.031 18:26:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:16.031 [2024-07-23 18:26:16.004987] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
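dualcast completes in about 1.61 s and compare starts with the same one-second, verified configuration. The per-test wall-clock figures are the "real 0mX.XXXs" lines printed at the end of each TEST block, so a saved copy of this output can be reduced to a quick timing summary with standard tools; a sketch, assuming the log was captured to a hypothetical build.log:

  # Sketch: pull the END TEST markers and the real/user/sys timings out of a
  # captured copy of this log.
  grep -oE 'END TEST [[:alnum:]_]+|real[[:space:]]+[0-9]+m[0-9.]+s' build.log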
00:08:16.031 [2024-07-23 18:26:16.005133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76832 ] 00:08:16.290 [2024-07-23 18:26:16.150394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.290 [2024-07-23 18:26:16.227366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.290 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:16.291 18:26:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:17.669 18:26:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.669 00:08:17.669 real 0m1.612s 00:08:17.669 user 0m1.337s 00:08:17.669 sys 0m0.191s 00:08:17.669 18:26:17 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.669 18:26:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:17.669 ************************************ 00:08:17.669 END TEST accel_compare 00:08:17.669 ************************************ 00:08:17.669 18:26:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:17.669 18:26:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:17.669 18:26:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.669 18:26:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.669 ************************************ 00:08:17.669 START TEST accel_xor 00:08:17.669 ************************************ 00:08:17.669 18:26:17 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:17.669 18:26:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:17.669 [2024-07-23 18:26:17.681856] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
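The xor case configured next uses the default source count (the val=2 echoed below), and the real/user/sys triple printed at the end of every TEST block matches the output format of the shell's time builtin. A sketch of timing the plain xor run the same way, under the same assumptions as the earlier invocations:

  # Sketch: time the default two-source xor case.
  time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y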
00:08:17.669 [2024-07-23 18:26:17.682069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76873 ] 00:08:17.929 [2024-07-23 18:26:17.829427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.929 [2024-07-23 18:26:17.904600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:18.188 18:26:17 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:18.188 18:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.566 00:08:19.566 real 0m1.607s 00:08:19.566 user 0m1.334s 00:08:19.566 sys 0m0.187s 00:08:19.566 18:26:19 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.566 18:26:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:19.566 ************************************ 00:08:19.566 END TEST accel_xor 00:08:19.566 ************************************ 00:08:19.566 18:26:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:19.566 18:26:19 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:19.566 18:26:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.566 18:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.566 ************************************ 00:08:19.566 START TEST accel_xor 00:08:19.566 ************************************ 00:08:19.566 18:26:19 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:19.566 18:26:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:19.566 [2024-07-23 18:26:19.356632] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
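This second accel_xor block repeats the workload with -x 3, which the val=3 echoed in the configuration below suggests raises the number of xor source buffers from the default two to three. A sketch of the two variants side by side, same assumptions as above:

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y        # default source count (2, per the echoed config)
  "$ACCEL_PERF" -t 1 -w xor -y -x 3   # three source buffers, as exercised here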
00:08:19.566 [2024-07-23 18:26:19.356863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76903 ] 00:08:19.566 [2024-07-23 18:26:19.503732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.566 [2024-07-23 18:26:19.577992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:19.825 18:26:19 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.825 18:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:21.206 18:26:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.206 00:08:21.206 real 0m1.611s 00:08:21.206 user 0m1.324s 00:08:21.206 sys 0m0.202s 00:08:21.206 ************************************ 00:08:21.206 END TEST accel_xor 00:08:21.206 ************************************ 00:08:21.206 18:26:20 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.206 18:26:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:21.206 18:26:20 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:21.206 18:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:21.206 18:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.206 18:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.206 ************************************ 00:08:21.206 START TEST accel_dif_verify 00:08:21.206 ************************************ 00:08:21.206 18:26:20 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:21.206 18:26:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:21.206 [2024-07-23 18:26:21.029893] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:21.206 [2024-07-23 18:26:21.030058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76944 ] 00:08:21.206 [2024-07-23 18:26:21.178015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.206 [2024-07-23 18:26:21.255841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.466 18:26:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:22.875 18:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.875 00:08:22.875 real 0m1.615s 00:08:22.875 user 0m1.339s 00:08:22.875 sys 0m0.194s 00:08:22.875 18:26:22 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.875 ************************************ 00:08:22.875 END TEST accel_dif_verify 00:08:22.875 ************************************ 00:08:22.875 18:26:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 18:26:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:22.875 18:26:22 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:22.875 18:26:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.875 18:26:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 ************************************ 00:08:22.875 START TEST accel_dif_generate 00:08:22.875 ************************************ 00:08:22.875 18:26:22 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:22.875 18:26:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:22.875 [2024-07-23 18:26:22.703640] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:22.875 [2024-07-23 18:26:22.703835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76979 ] 00:08:22.875 [2024-07-23 18:26:22.841213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.875 [2024-07-23 18:26:22.919319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.138 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.139 18:26:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 
18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.139 18:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:24.533 18:26:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.533 00:08:24.533 real 0m1.602s 00:08:24.533 user 0m1.324s 00:08:24.533 sys 0m0.187s 00:08:24.533 18:26:24 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.533 
18:26:24 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:24.533 ************************************ 00:08:24.533 END TEST accel_dif_generate 00:08:24.533 ************************************ 00:08:24.533 18:26:24 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:24.533 18:26:24 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:24.533 18:26:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.533 18:26:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.533 ************************************ 00:08:24.533 START TEST accel_dif_generate_copy 00:08:24.533 ************************************ 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:24.533 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:24.533 [2024-07-23 18:26:24.361314] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:24.533 [2024-07-23 18:26:24.361473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77015 ] 00:08:24.533 [2024-07-23 18:26:24.506018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.533 [2024-07-23 18:26:24.578158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.793 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.794 18:26:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.173 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:26.174 ************************************ 00:08:26.174 END TEST accel_dif_generate_copy 00:08:26.174 ************************************ 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.174 00:08:26.174 real 0m1.602s 00:08:26.174 user 0m1.334s 00:08:26.174 sys 0m0.185s 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.174 18:26:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.174 18:26:25 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:26.174 18:26:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.174 18:26:25 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:26.174 18:26:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.174 18:26:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.174 ************************************ 00:08:26.174 START TEST accel_comp 00:08:26.174 ************************************ 00:08:26.174 18:26:25 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:26.174 18:26:25 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:26.174 [2024-07-23 18:26:26.028203] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:26.174 [2024-07-23 18:26:26.028413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77056 ] 00:08:26.174 [2024-07-23 18:26:26.169304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.434 [2024-07-23 18:26:26.241392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.434 18:26:26 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.434 18:26:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:27.815 18:26:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.815 00:08:27.815 real 0m1.605s 00:08:27.815 user 0m1.331s 00:08:27.815 sys 0m0.191s 00:08:27.815 18:26:27 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.815 18:26:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:27.815 ************************************ 00:08:27.815 END TEST accel_comp 00:08:27.815 ************************************ 00:08:27.815 18:26:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:27.815 18:26:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:27.815 18:26:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.815 18:26:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.815 ************************************ 00:08:27.815 START TEST accel_decomp 00:08:27.815 ************************************ 00:08:27.815 18:26:27 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:27.815 
18:26:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:27.815 18:26:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:27.816 18:26:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:27.816 [2024-07-23 18:26:27.697297] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:27.816 [2024-07-23 18:26:27.697500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77086 ] 00:08:27.816 [2024-07-23 18:26:27.838613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.075 [2024-07-23 18:26:27.914764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.075 18:26:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.076 18:26:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:29.457 18:26:29 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.457 00:08:29.457 real 0m1.604s 00:08:29.457 user 0m1.323s 00:08:29.457 sys 0m0.194s 00:08:29.457 18:26:29 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.457 18:26:29 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:29.457 ************************************ 00:08:29.457 END TEST accel_decomp 00:08:29.457 ************************************ 00:08:29.457 18:26:29 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:29.457 18:26:29 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:29.457 18:26:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.457 18:26:29 accel -- common/autotest_common.sh@10 -- # set +x 
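Each accel_decomp* case below wraps the same accel_perf example binary; the harness feeds it the generated accel JSON config over /dev/fd/62 and only the workload flags change from case to case. A minimal standalone sketch of the base invocation, copied from the accel.sh@12 trace lines above (the -c /dev/fd/62 config redirection is dropped here on the assumption that the default software module then handles the work, which matches the accel_module=software lines these traces report):

  # hypothetical standalone run; the harness normally also passes -c /dev/fd/62
  # decompress the bundled test file for 1 second and verify the output (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y
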
00:08:29.457 ************************************ 00:08:29.457 START TEST accel_decmop_full 00:08:29.457 ************************************ 00:08:29.457 18:26:29 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:29.457 18:26:29 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:29.457 [2024-07-23 18:26:29.363619] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:29.457 [2024-07-23 18:26:29.363749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77127 ] 00:08:29.458 [2024-07-23 18:26:29.499741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.717 [2024-07-23 18:26:29.576878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:29.717 18:26:29 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.097 18:26:30 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.097 00:08:31.097 real 0m1.614s 00:08:31.097 user 0m0.025s 00:08:31.097 sys 0m0.001s 00:08:31.097 18:26:30 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.097 18:26:30 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 ************************************ 00:08:31.097 END TEST accel_decmop_full 00:08:31.097 ************************************ 00:08:31.097 18:26:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:31.097 18:26:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:31.097 18:26:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.097 18:26:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.097 ************************************ 00:08:31.097 START TEST accel_decomp_mcore 00:08:31.097 ************************************ 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
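The accel_decmop_full case that just finished adds -o 0 to that base command; judging by the val= lines in its trace ('111250 bytes' instead of the '4096 bytes' reported for plain accel_decomp), this makes the harness submit the whole test file per operation rather than 4 KiB chunks. The accel_decomp_mcore case starting here instead adds -m 0xf, so accel_perf comes up on a four-core mask (the EAL parameters below show -c 0xf, and reactors are started on all four cores). Sketch of the multi-core variant, flags copied from the run_test line above:

  # multi-core decompress: core mask 0xf selects four cores (sketch, -c /dev/fd/62 omitted)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y \
      -m 0xf
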
00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:31.097 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:31.098 18:26:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:31.098 [2024-07-23 18:26:31.037781] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:31.098 [2024-07-23 18:26:31.037926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77163 ] 00:08:31.357 [2024-07-23 18:26:31.179531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.357 [2024-07-23 18:26:31.256784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.357 [2024-07-23 18:26:31.257019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.357 [2024-07-23 18:26:31.257210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.357 [2024-07-23 18:26:31.257266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.357 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.358 18:26:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.739 00:08:32.739 real 0m1.627s 00:08:32.739 user 0m0.016s 00:08:32.739 sys 0m0.003s 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.739 18:26:32 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:32.739 ************************************ 00:08:32.739 END TEST accel_decomp_mcore 00:08:32.739 ************************************ 00:08:32.739 18:26:32 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.739 18:26:32 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:32.739 18:26:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.740 18:26:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.740 ************************************ 00:08:32.740 START TEST accel_decomp_full_mcore 00:08:32.740 ************************************ 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:32.740 18:26:32 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
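accel_decomp_full_mcore combines the two previous variants, passing both -o 0 (full 111250-byte buffers) and -m 0xf (four cores). Equivalent standalone sketch, again minus the /dev/fd/62 config:

  # full-buffer, multi-core decompress (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y \
      -o 0 -m 0xf
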
00:08:32.740 [2024-07-23 18:26:32.742517] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:32.740 [2024-07-23 18:26:32.742721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77201 ] 00:08:32.999 [2024-07-23 18:26:32.893380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.999 [2024-07-23 18:26:32.978254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.999 [2024-07-23 18:26:32.978383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.999 [2024-07-23 18:26:32.978523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.999 [2024-07-23 18:26:32.978427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.260 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.261 18:26:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:33.261 18:26:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:34.659 18:26:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.659 00:08:34.659 real 0m1.666s 00:08:34.659 user 0m0.017s 00:08:34.659 sys 0m0.004s 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.659 ************************************ 00:08:34.659 END TEST accel_decomp_full_mcore 00:08:34.659 ************************************ 00:08:34.659 18:26:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:34.659 18:26:34 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:34.659 18:26:34 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:34.659 18:26:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.659 18:26:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.659 ************************************ 00:08:34.659 START TEST accel_decomp_mthread 00:08:34.659 ************************************ 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:34.659 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:34.659 [2024-07-23 18:26:34.457647] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
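accel_decomp_mthread goes back to a single core but adds -T 2 on the accel_perf command line (visible in the accel.sh@12 trace above), which appears to request two worker threads; the exact threading semantics aren't shown in this log. Sketch:

  # single-core decompress with -T 2 (assumed: two worker threads; sketch only)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y \
      -T 2
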
00:08:34.659 [2024-07-23 18:26:34.457826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77245 ] 00:08:34.659 [2024-07-23 18:26:34.606194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.659 [2024-07-23 18:26:34.679402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.919 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.920 18:26:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.311 00:08:36.311 real 0m1.600s 00:08:36.311 user 0m1.318s 00:08:36.311 sys 0m0.200s 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.311 18:26:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 ************************************ 00:08:36.311 END TEST accel_decomp_mthread 00:08:36.311 ************************************ 00:08:36.311 18:26:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.311 18:26:36 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:36.311 18:26:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.312 18:26:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.312 ************************************ 00:08:36.312 START TEST accel_decomp_full_mthread 00:08:36.312 ************************************ 00:08:36.312 18:26:36 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:36.312 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:36.312 [2024-07-23 18:26:36.104892] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:36.312 [2024-07-23 18:26:36.105118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77281 ] 00:08:36.312 [2024-07-23 18:26:36.264900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.312 [2024-07-23 18:26:36.318560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:36.571 18:26:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.508 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.508 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.508 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:37.509 ************************************ 00:08:37.509 END TEST accel_decomp_full_mthread 00:08:37.509 ************************************ 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.509 00:08:37.509 real 0m1.492s 00:08:37.509 user 0m0.016s 00:08:37.509 sys 0m0.005s 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.509 18:26:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:37.768 18:26:37 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:08:37.768 18:26:37 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:37.768 18:26:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:37.768 18:26:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.768 18:26:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.768 18:26:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.769 18:26:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.769 18:26:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.769 18:26:37 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:37.769 18:26:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:37.769 18:26:37 accel -- accel/accel.sh@41 -- # jq -r . 00:08:37.769 18:26:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.769 18:26:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.769 ************************************ 00:08:37.769 START TEST accel_dif_functional_tests 00:08:37.769 ************************************ 00:08:37.769 18:26:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:37.769 [2024-07-23 18:26:37.687629] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:37.769 [2024-07-23 18:26:37.687761] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77317 ] 00:08:38.028 [2024-07-23 18:26:37.835556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.028 [2024-07-23 18:26:37.882603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.028 [2024-07-23 18:26:37.882616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.028 [2024-07-23 18:26:37.882639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.028 00:08:38.028 00:08:38.028 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.028 http://cunit.sourceforge.net/ 00:08:38.028 00:08:38.028 00:08:38.028 Suite: accel_dif 00:08:38.028 Test: verify: DIF generated, GUARD check ...passed 00:08:38.028 Test: verify: DIF generated, APPTAG check ...passed 00:08:38.028 Test: verify: DIF generated, REFTAG check ...passed 00:08:38.028 Test: verify: DIF not generated, GUARD check ...passed 00:08:38.028 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 18:26:37.953446] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:38.028 passed 00:08:38.028 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 18:26:37.953563] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:38.028 passed 00:08:38.028 Test: verify: APPTAG correct, APPTAG check ...[2024-07-23 18:26:37.953627] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:38.028 passed 00:08:38.028 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 18:26:37.953756] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:38.028 passed 00:08:38.028 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:38.028 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:38.028 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:08:38.028 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 18:26:37.954054] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:38.028 passed 00:08:38.028 Test: verify copy: DIF generated, GUARD check ...passed 00:08:38.028 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:38.028 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:38.028 Test: verify copy: DIF not generated, GUARD check ...passed 00:08:38.028 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 18:26:37.954327] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:38.028 [2024-07-23 18:26:37.954396] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:38.028 passed 00:08:38.028 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 18:26:37.954448] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:38.028 passed 00:08:38.028 Test: generate copy: DIF generated, GUARD check ...passed 00:08:38.028 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:38.028 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:38.028 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:38.028 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:38.028 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:38.028 Test: generate copy: iovecs-len validate ...passed 00:08:38.028 Test: generate copy: buffer alignment validate ...[2024-07-23 18:26:37.955022] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:38.028 passed 00:08:38.028 00:08:38.028 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.028 suites 1 1 n/a 0 0 00:08:38.028 tests 26 26 26 0 0 00:08:38.028 asserts 115 115 115 0 n/a 00:08:38.028 00:08:38.028 Elapsed time = 0.005 seconds 00:08:38.287 00:08:38.287 real 0m0.570s 00:08:38.287 user 0m0.650s 00:08:38.287 sys 0m0.189s 00:08:38.287 ************************************ 00:08:38.287 END TEST accel_dif_functional_tests 00:08:38.287 ************************************ 00:08:38.287 18:26:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.287 18:26:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 00:08:38.287 real 0m37.318s 00:08:38.287 user 0m37.361s 00:08:38.287 sys 0m6.150s 00:08:38.287 18:26:38 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.287 ************************************ 00:08:38.287 END TEST accel 00:08:38.287 ************************************ 00:08:38.287 18:26:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 18:26:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:38.287 18:26:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:38.287 18:26:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.287 18:26:38 -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 ************************************ 00:08:38.287 START TEST accel_rpc 00:08:38.287 ************************************ 00:08:38.287 18:26:38 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:38.546 * Looking for test storage... 00:08:38.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:38.546 18:26:38 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:38.546 18:26:38 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77383 00:08:38.546 18:26:38 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:38.546 18:26:38 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77383 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 77383 ']' 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:38.546 18:26:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.546 [2024-07-23 18:26:38.496244] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:38.546 [2024-07-23 18:26:38.496379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77383 ] 00:08:38.804 [2024-07-23 18:26:38.638447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.804 [2024-07-23 18:26:38.696667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.371 18:26:39 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:39.371 18:26:39 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:39.371 18:26:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:39.371 18:26:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:39.371 18:26:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:39.371 18:26:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:39.371 18:26:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:39.371 18:26:39 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:39.371 18:26:39 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.371 18:26:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.371 ************************************ 00:08:39.371 START TEST accel_assign_opcode 00:08:39.371 ************************************ 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:39.371 [2024-07-23 18:26:39.348259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:39.371 [2024-07-23 18:26:39.360211] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.371 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.630 software 00:08:39.630 00:08:39.630 real 0m0.253s 00:08:39.630 user 0m0.044s 00:08:39.630 ************************************ 00:08:39.630 END TEST accel_assign_opcode 00:08:39.630 ************************************ 00:08:39.630 sys 0m0.017s 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:39.630 18:26:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:39.630 18:26:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77383 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 77383 ']' 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 77383 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77383 00:08:39.630 killing process with pid 77383 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77383' 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@965 -- # kill 77383 00:08:39.630 18:26:39 accel_rpc -- common/autotest_common.sh@970 -- # wait 77383 00:08:40.199 00:08:40.199 real 0m1.766s 00:08:40.199 user 0m1.734s 00:08:40.199 sys 0m0.496s 00:08:40.199 ************************************ 00:08:40.199 END TEST accel_rpc 00:08:40.199 ************************************ 00:08:40.199 18:26:40 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.199 18:26:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.199 18:26:40 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:40.199 18:26:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:40.199 18:26:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.199 18:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:40.199 ************************************ 00:08:40.199 START TEST app_cmdline 00:08:40.199 ************************************ 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:40.199 * Looking for test storage... 
00:08:40.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:40.199 18:26:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:40.199 18:26:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:40.199 18:26:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77477 00:08:40.199 18:26:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77477 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77477 ']' 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.199 18:26:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:40.458 [2024-07-23 18:26:40.342765] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:40.458 [2024-07-23 18:26:40.343041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ] 00:08:40.458 [2024-07-23 18:26:40.499297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.716 [2024-07-23 18:26:40.546702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.284 18:26:41 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.284 18:26:41 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:41.284 18:26:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:41.543 { 00:08:41.543 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:08:41.543 "fields": { 00:08:41.543 "major": 24, 00:08:41.543 "minor": 5, 00:08:41.543 "patch": 1, 00:08:41.543 "suffix": "-pre", 00:08:41.543 "commit": "241d0f3c9" 00:08:41.543 } 00:08:41.543 } 00:08:41.543 18:26:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:41.543 18:26:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:41.543 18:26:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:41.543 18:26:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:41.544 18:26:41 app_cmdline -- app/cmdline.sh@30 -- 
# NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:41.544 18:26:41 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.807 request: 00:08:41.807 { 00:08:41.807 "method": "env_dpdk_get_mem_stats", 00:08:41.807 "req_id": 1 00:08:41.807 } 00:08:41.807 Got JSON-RPC error response 00:08:41.807 response: 00:08:41.807 { 00:08:41.807 "code": -32601, 00:08:41.807 "message": "Method not found" 00:08:41.807 } 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:41.807 18:26:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77477 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77477 ']' 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77477 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77477 00:08:41.807 killing process with pid 77477 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77477' 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@965 -- # kill 77477 00:08:41.807 18:26:41 app_cmdline -- common/autotest_common.sh@970 -- # wait 77477 00:08:42.071 00:08:42.071 real 0m1.938s 00:08:42.071 user 0m2.227s 00:08:42.071 sys 0m0.527s 00:08:42.071 18:26:42 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.071 18:26:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.071 ************************************ 00:08:42.071 END TEST app_cmdline 00:08:42.071 ************************************ 00:08:42.071 18:26:42 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 
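For reference, the app_cmdline run above exercises the --rpcs-allowed whitelist: spdk_tgt was started with only spdk_get_version and rpc_get_methods reachable, and any other method is rejected with JSON-RPC error -32601. A minimal manual reproduction, assuming the same repo layout as in this log, looks like:

  # start the target with the same restricted whitelist used above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

  # allowed method: returns the version object shown in the log
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

  # any method off the whitelist is expected to fail with "Method not found" (-32601)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats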
00:08:42.071 18:26:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:42.071 18:26:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.071 18:26:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.071 ************************************ 00:08:42.071 START TEST version 00:08:42.071 ************************************ 00:08:42.071 18:26:42 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:42.331 * Looking for test storage... 00:08:42.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:42.331 18:26:42 version -- app/version.sh@17 -- # get_header_version major 00:08:42.331 18:26:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # cut -f2 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.331 18:26:42 version -- app/version.sh@17 -- # major=24 00:08:42.331 18:26:42 version -- app/version.sh@18 -- # get_header_version minor 00:08:42.331 18:26:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # cut -f2 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.331 18:26:42 version -- app/version.sh@18 -- # minor=5 00:08:42.331 18:26:42 version -- app/version.sh@19 -- # get_header_version patch 00:08:42.331 18:26:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # cut -f2 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.331 18:26:42 version -- app/version.sh@19 -- # patch=1 00:08:42.331 18:26:42 version -- app/version.sh@20 -- # get_header_version suffix 00:08:42.331 18:26:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # cut -f2 00:08:42.331 18:26:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.331 18:26:42 version -- app/version.sh@20 -- # suffix=-pre 00:08:42.331 18:26:42 version -- app/version.sh@22 -- # version=24.5 00:08:42.331 18:26:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.331 18:26:42 version -- app/version.sh@25 -- # version=24.5.1 00:08:42.331 18:26:42 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:42.331 18:26:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:42.331 18:26:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:42.331 18:26:42 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:42.331 18:26:42 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:42.331 00:08:42.331 real 0m0.213s 00:08:42.331 user 0m0.112s 00:08:42.331 sys 0m0.151s 00:08:42.331 18:26:42 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.331 18:26:42 version -- common/autotest_common.sh@10 -- # set +x 00:08:42.331 ************************************ 00:08:42.331 END TEST version 00:08:42.331 ************************************ 00:08:42.331 
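The version test above is plain text extraction from include/spdk/version.h checked against the installed Python package; a condensed sketch of the same pipeline (same header path and the same grep/cut/tr handling as the get_header_version calls in the log) is:

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
  echo "$(get MAJOR).$(get MINOR).$(get PATCH)$(get SUFFIX)"   # 24.5.1-pre in this run
  # the harness maps the -pre suffix to an rc0 tag before comparing against:
  python3 -c 'import spdk; print(spdk.__version__)'            # 24.5.1rc0 in this run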
18:26:42 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:42.591 18:26:42 -- spdk/autotest.sh@198 -- # uname -s 00:08:42.591 18:26:42 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:42.591 18:26:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.591 18:26:42 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:42.591 18:26:42 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:42.591 18:26:42 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:42.591 18:26:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:42.591 18:26:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.591 18:26:42 -- common/autotest_common.sh@10 -- # set +x 00:08:42.591 ************************************ 00:08:42.591 START TEST blockdev_nvme 00:08:42.591 ************************************ 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:42.591 * Looking for test storage... 00:08:42.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:42.591 18:26:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77622 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.591 18:26:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77622 00:08:42.591 
18:26:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77622 ']' 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.591 18:26:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.591 [2024-07-23 18:26:42.641105] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:42.591 [2024-07-23 18:26:42.641387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77622 ] 00:08:42.850 [2024-07-23 18:26:42.780165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.850 [2024-07-23 18:26:42.827760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.419 18:26:43 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.419 18:26:43 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:08:43.419 18:26:43 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:43.419 18:26:43 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:43.419 18:26:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:43.419 18:26:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:43.419 18:26:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.678 18:26:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:43.678 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.678 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:43.937 18:26:43 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:43.937 18:26:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:43.938 18:26:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "df519ffb-fd3a-4e48-91c0-f574113773f8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "df519ffb-fd3a-4e48-91c0-f574113773f8",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c31fc4b6-cb67-440b-a52d-0fc8bd53879e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c31fc4b6-cb67-440b-a52d-0fc8bd53879e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a9bedcb9-1dca-4d76-bab3-fd8d4cb002b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9bedcb9-1dca-4d76-bab3-fd8d4cb002b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bae21904-fd4f-4654-8e95-3dd63b1970f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bae21904-fd4f-4654-8e95-3dd63b1970f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "854a9d8c-5325-45b7-a464-f428929b8816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' 
"uuid": "854a9d8c-5325-45b7-a464-f428929b8816",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4410ec39-adac-44aa-89ef-46188f58135e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4410ec39-adac-44aa-89ef-46188f58135e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:44.196 18:26:44 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:44.196 18:26:44 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:44.196 18:26:44 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:44.196 18:26:44 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 77622 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77622 ']' 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 77622 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77622 00:08:44.196 killing process with pid 77622 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.196 18:26:44 blockdev_nvme -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 77622' 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 77622 00:08:44.196 18:26:44 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77622 00:08:44.764 18:26:44 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:44.764 18:26:44 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:44.764 18:26:44 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:44.764 18:26:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.764 18:26:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.764 ************************************ 00:08:44.764 START TEST bdev_hello_world 00:08:44.764 ************************************ 00:08:44.764 18:26:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:44.764 [2024-07-23 18:26:44.785834] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:44.764 [2024-07-23 18:26:44.786057] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77695 ] 00:08:45.023 [2024-07-23 18:26:44.918974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.023 [2024-07-23 18:26:45.008726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.592 [2024-07-23 18:26:45.443803] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:45.592 [2024-07-23 18:26:45.443972] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:45.592 [2024-07-23 18:26:45.444035] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:45.592 [2024-07-23 18:26:45.446741] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:45.592 [2024-07-23 18:26:45.447118] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:45.592 [2024-07-23 18:26:45.447188] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:45.592 [2024-07-23 18:26:45.447385] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
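The bdev_hello_world step above is the stock hello_bdev example pointed at the first controller from the bdev_nvme_attach_controller config that gen_nvme.sh generated earlier in this run; the equivalent standalone invocation, assuming the same paths as logged, is:

  # bdev.json holds the four bdev_nvme_attach_controller entries (0000:00:10.0 .. 0000:00:13.0)
  cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

  # writes a greeting to Nvme0n1 and reads it back; success ends with
  # "Read string from bdev : Hello World!" as logged above
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json "$cfg" -b Nvme0n1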
00:08:45.592 00:08:45.592 [2024-07-23 18:26:45.447460] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:45.851 00:08:45.851 real 0m1.110s 00:08:45.851 user 0m0.729s 00:08:45.851 sys 0m0.279s 00:08:45.851 18:26:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.851 18:26:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:45.851 ************************************ 00:08:45.851 END TEST bdev_hello_world 00:08:45.851 ************************************ 00:08:45.851 18:26:45 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:45.851 18:26:45 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:45.851 18:26:45 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.851 18:26:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.851 ************************************ 00:08:45.851 START TEST bdev_bounds 00:08:45.851 ************************************ 00:08:45.851 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:45.851 18:26:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=77726 00:08:45.851 18:26:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.851 Process bdevio pid: 77726 00:08:45.851 18:26:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 77726' 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 77726 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77726 ']' 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.852 18:26:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:46.111 [2024-07-23 18:26:45.965531] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:08:46.111 [2024-07-23 18:26:45.965799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77726 ] 00:08:46.111 [2024-07-23 18:26:46.102304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.369 [2024-07-23 18:26:46.212025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.369 [2024-07-23 18:26:46.212236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.369 [2024-07-23 18:26:46.212135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.938 18:26:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.938 18:26:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:46.938 18:26:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:46.938 I/O targets: 00:08:46.938 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:46.938 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:46.938 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.938 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.938 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.938 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:46.938 00:08:46.938 00:08:46.938 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.938 http://cunit.sourceforge.net/ 00:08:46.938 00:08:46.938 00:08:46.938 Suite: bdevio tests on: Nvme3n1 00:08:46.938 Test: blockdev write read block ...passed 00:08:46.938 Test: blockdev write zeroes read block ...passed 00:08:46.938 Test: blockdev write zeroes read no split ...passed 00:08:46.938 Test: blockdev write zeroes read split ...passed 00:08:46.938 Test: blockdev write zeroes read split partial ...passed 00:08:46.938 Test: blockdev reset ...[2024-07-23 18:26:46.911827] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:46.938 passed 00:08:46.938 Test: blockdev write read 8 blocks ...[2024-07-23 18:26:46.914153] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
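For context, the per-bdev suites reported here are produced by the standalone bdevio app: the bounds test launches it against the same JSON bdev configuration and then asks it to run its built-in suites over RPC. A condensed sketch of that flow, using the paths from the command lines traced above (the wait-for-socket step is simplified):

  # Start the bdevio target with the shared bdev configuration (as traced above).
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # Once the app is listening on /var/tmp/spdk.sock, run the per-bdev test suites.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill $bdevio_pid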
00:08:46.938 passed 00:08:46.938 Test: blockdev write read size > 128k ...passed 00:08:46.938 Test: blockdev write read invalid size ...passed 00:08:46.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.938 Test: blockdev write read max offset ...passed 00:08:46.938 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.938 Test: blockdev writev readv 8 blocks ...passed 00:08:46.938 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.938 Test: blockdev writev readv block ...passed 00:08:46.938 Test: blockdev writev readv size > 128k ...passed 00:08:46.938 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.938 Test: blockdev comparev and writev ...[2024-07-23 18:26:46.920300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8c0e000 len:0x1000 00:08:46.938 [2024-07-23 18:26:46.920365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.938 passed 00:08:46.938 Test: blockdev nvme passthru rw ...passed 00:08:46.938 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.938 Test: blockdev nvme admin passthru ...[2024-07-23 18:26:46.921043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.938 [2024-07-23 18:26:46.921100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.938 passed 00:08:46.938 Test: blockdev copy ...passed 00:08:46.938 Suite: bdevio tests on: Nvme2n3 00:08:46.938 Test: blockdev write read block ...passed 00:08:46.938 Test: blockdev write zeroes read block ...passed 00:08:46.938 Test: blockdev write zeroes read no split ...passed 00:08:46.938 Test: blockdev write zeroes read split ...passed 00:08:46.938 Test: blockdev write zeroes read split partial ...passed 00:08:46.938 Test: blockdev reset ...[2024-07-23 18:26:46.936870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:46.938 passed 00:08:46.938 Test: blockdev write read 8 blocks ...[2024-07-23 18:26:46.939310] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.938 passed 00:08:46.938 Test: blockdev write read size > 128k ...passed 00:08:46.938 Test: blockdev write read invalid size ...passed 00:08:46.938 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.938 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.938 Test: blockdev write read max offset ...passed 00:08:46.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.939 Test: blockdev writev readv 8 blocks ...passed 00:08:46.939 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.939 Test: blockdev writev readv block ...passed 00:08:46.939 Test: blockdev writev readv size > 128k ...passed 00:08:46.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.939 Test: blockdev comparev and writev ...[2024-07-23 18:26:46.945228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8c0a000 len:0x1000 00:08:46.939 [2024-07-23 18:26:46.945280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.939 passed 00:08:46.939 Test: blockdev nvme passthru rw ...passed 00:08:46.939 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.939 Test: blockdev nvme admin passthru ...[2024-07-23 18:26:46.946034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.939 [2024-07-23 18:26:46.946073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.939 passed 00:08:46.939 Test: blockdev copy ...passed 00:08:46.939 Suite: bdevio tests on: Nvme2n2 00:08:46.939 Test: blockdev write read block ...passed 00:08:46.939 Test: blockdev write zeroes read block ...passed 00:08:46.939 Test: blockdev write zeroes read no split ...passed 00:08:46.939 Test: blockdev write zeroes read split ...passed 00:08:46.939 Test: blockdev write zeroes read split partial ...passed 00:08:46.939 Test: blockdev reset ...[2024-07-23 18:26:46.963945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:46.939 [2024-07-23 18:26:46.966429] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.939 passed 00:08:46.939 Test: blockdev write read 8 blocks ...passed 00:08:46.939 Test: blockdev write read size > 128k ...passed 00:08:46.939 Test: blockdev write read invalid size ...passed 00:08:46.939 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.939 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.939 Test: blockdev write read max offset ...passed 00:08:46.939 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.939 Test: blockdev writev readv 8 blocks ...passed 00:08:46.939 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.939 Test: blockdev writev readv block ...passed 00:08:46.939 Test: blockdev writev readv size > 128k ...passed 00:08:46.939 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.939 Test: blockdev comparev and writev ...[2024-07-23 18:26:46.973394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:08:46.939 Test: blockdev nvme passthru rw ...passed 00:08:46.939 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2b8c06000 len:0x1000 00:08:46.939 [2024-07-23 18:26:46.973503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.939 passed 00:08:46.939 Test: blockdev nvme admin passthru ...[2024-07-23 18:26:46.974186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.939 [2024-07-23 18:26:46.974224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.939 passed 00:08:46.939 Test: blockdev copy ...passed 00:08:46.939 Suite: bdevio tests on: Nvme2n1 00:08:46.939 Test: blockdev write read block ...passed 00:08:46.939 Test: blockdev write zeroes read block ...passed 00:08:47.198 Test: blockdev write zeroes read no split ...passed 00:08:47.198 Test: blockdev write zeroes read split ...passed 00:08:47.198 Test: blockdev write zeroes read split partial ...passed 00:08:47.198 Test: blockdev reset ...[2024-07-23 18:26:47.004987] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:47.198 [2024-07-23 18:26:47.007459] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:47.198 passed 00:08:47.198 Test: blockdev write read 8 blocks ...passed 00:08:47.198 Test: blockdev write read size > 128k ...passed 00:08:47.198 Test: blockdev write read invalid size ...passed 00:08:47.198 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.198 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.198 Test: blockdev write read max offset ...passed 00:08:47.198 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.198 Test: blockdev writev readv 8 blocks ...passed 00:08:47.198 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.198 Test: blockdev writev readv block ...passed 00:08:47.198 Test: blockdev writev readv size > 128k ...passed 00:08:47.198 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.198 Test: blockdev comparev and writev ...[2024-07-23 18:26:47.015221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8c02000 len:0x1000 00:08:47.198 [2024-07-23 18:26:47.015343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.198 passed 00:08:47.198 Test: blockdev nvme passthru rw ...passed 00:08:47.198 Test: blockdev nvme passthru vendor specific ...[2024-07-23 18:26:47.016384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.198 [2024-07-23 18:26:47.016477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.198 passed 00:08:47.198 Test: blockdev nvme admin passthru ...passed 00:08:47.198 Test: blockdev copy ...passed 00:08:47.198 Suite: bdevio tests on: Nvme1n1 00:08:47.198 Test: blockdev write read block ...passed 00:08:47.198 Test: blockdev write zeroes read block ...passed 00:08:47.198 Test: blockdev write zeroes read no split ...passed 00:08:47.198 Test: blockdev write zeroes read split ...passed 00:08:47.198 Test: blockdev write zeroes read split partial ...passed 00:08:47.198 Test: blockdev reset ...[2024-07-23 18:26:47.046618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:47.198 [2024-07-23 18:26:47.048955] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:47.199 passed 00:08:47.199 Test: blockdev write read 8 blocks ...passed 00:08:47.199 Test: blockdev write read size > 128k ...passed 00:08:47.199 Test: blockdev write read invalid size ...passed 00:08:47.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.199 Test: blockdev write read max offset ...passed 00:08:47.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.199 Test: blockdev writev readv 8 blocks ...passed 00:08:47.199 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.199 Test: blockdev writev readv block ...passed 00:08:47.199 Test: blockdev writev readv size > 128k ...passed 00:08:47.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.199 Test: blockdev comparev and writev ...[2024-07-23 18:26:47.056865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b380e000 len:0x1000 00:08:47.199 [2024-07-23 18:26:47.056982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.199 passed 00:08:47.199 Test: blockdev nvme passthru rw ...passed 00:08:47.199 Test: blockdev nvme passthru vendor specific ...[2024-07-23 18:26:47.058024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.199 [2024-07-23 18:26:47.058117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.199 passed 00:08:47.199 Test: blockdev nvme admin passthru ...passed 00:08:47.199 Test: blockdev copy ...passed 00:08:47.199 Suite: bdevio tests on: Nvme0n1 00:08:47.199 Test: blockdev write read block ...passed 00:08:47.199 Test: blockdev write zeroes read block ...passed 00:08:47.199 Test: blockdev write zeroes read no split ...passed 00:08:47.199 Test: blockdev write zeroes read split ...passed 00:08:47.199 Test: blockdev write zeroes read split partial ...passed 00:08:47.199 Test: blockdev reset ...[2024-07-23 18:26:47.087872] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:47.199 passed 00:08:47.199 Test: blockdev write read 8 blocks ...[2024-07-23 18:26:47.090106] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:47.199 passed 00:08:47.199 Test: blockdev write read size > 128k ...passed 00:08:47.199 Test: blockdev write read invalid size ...passed 00:08:47.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.199 Test: blockdev write read max offset ...passed 00:08:47.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.199 Test: blockdev writev readv 8 blocks ...passed 00:08:47.199 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.199 Test: blockdev writev readv block ...passed 00:08:47.199 Test: blockdev writev readv size > 128k ...passed 00:08:47.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.199 Test: blockdev comparev and writev ...passed 00:08:47.199 Test: blockdev nvme passthru rw ...[2024-07-23 18:26:47.095915] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:47.199 separate metadata which is not supported yet. 00:08:47.199 passed 00:08:47.199 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.199 Test: blockdev nvme admin passthru ...[2024-07-23 18:26:47.096448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:47.199 [2024-07-23 18:26:47.096501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:47.199 passed 00:08:47.199 Test: blockdev copy ...passed 00:08:47.199 00:08:47.199 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.199 suites 6 6 n/a 0 0 00:08:47.199 tests 138 138 138 0 0 00:08:47.199 asserts 893 893 893 0 n/a 00:08:47.199 00:08:47.199 Elapsed time = 0.471 seconds 00:08:47.199 0 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 77726 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77726 ']' 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77726 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77726 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77726' 00:08:47.199 killing process with pid 77726 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77726 00:08:47.199 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77726 00:08:47.458 18:26:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:47.458 00:08:47.458 real 0m1.568s 00:08:47.458 user 0m3.616s 00:08:47.458 sys 0m0.423s 00:08:47.458 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.458 ************************************ 00:08:47.458 END TEST bdev_bounds 00:08:47.458 ************************************ 00:08:47.458 18:26:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:08:47.458 18:26:47 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.458 18:26:47 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:47.458 18:26:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.458 18:26:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.717 ************************************ 00:08:47.717 START TEST bdev_nbd 00:08:47.717 ************************************ 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=77780 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 77780 /var/tmp/spdk-nbd.sock 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77780 ']' 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:47.717 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-nbd.sock... 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:47.717 18:26:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:47.717 [2024-07-23 18:26:47.603383] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:47.717 [2024-07-23 18:26:47.603515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.717 [2024-07-23 18:26:47.753364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.977 [2024-07-23 18:26:47.835156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:48.544 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 
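What is being traced here is the export-and-verify step: each bdev is exposed as a kernel NBD device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, and a helper then waits for the device to appear in /proc/partitions and proves it is readable with a single 4 KiB direct read. A condensed sketch of that sequence for one device (the retry loop simplifies the waitfornbd helper traced above; the scratch file path is illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0

  for i in $(seq 1 20); do                      # wait for the kernel to register nbd0
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4K direct read
  rm -f /tmp/nbdtest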
00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.802 1+0 records in 00:08:48.802 1+0 records out 00:08:48.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426271 s, 9.6 MB/s 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:48.802 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.061 1+0 records in 00:08:49.061 1+0 records out 00:08:49.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513523 s, 8.0 MB/s 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.061 18:26:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.061 18:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.320 1+0 records in 00:08:49.320 1+0 records out 00:08:49.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402163 s, 10.2 MB/s 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.320 1+0 records in 00:08:49.320 1+0 records out 00:08:49.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530383 s, 7.7 MB/s 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.320 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.579 1+0 records in 00:08:49.579 1+0 records out 00:08:49.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447637 s, 9.2 MB/s 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.579 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:49.838 
18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.838 1+0 records in 00:08:49.838 1+0 records out 00:08:49.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922966 s, 4.4 MB/s 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.838 18:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.096 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:50.096 { 00:08:50.097 "nbd_device": "/dev/nbd0", 00:08:50.097 "bdev_name": "Nvme0n1" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd1", 00:08:50.097 "bdev_name": "Nvme1n1" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd2", 00:08:50.097 "bdev_name": "Nvme2n1" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd3", 00:08:50.097 "bdev_name": "Nvme2n2" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd4", 00:08:50.097 "bdev_name": "Nvme2n3" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd5", 00:08:50.097 "bdev_name": "Nvme3n1" 00:08:50.097 } 00:08:50.097 ]' 00:08:50.097 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:50.097 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd0", 00:08:50.097 "bdev_name": "Nvme0n1" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd1", 00:08:50.097 "bdev_name": "Nvme1n1" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd2", 00:08:50.097 "bdev_name": "Nvme2n1" 
00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd3", 00:08:50.097 "bdev_name": "Nvme2n2" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd4", 00:08:50.097 "bdev_name": "Nvme2n3" 00:08:50.097 }, 00:08:50.097 { 00:08:50.097 "nbd_device": "/dev/nbd5", 00:08:50.097 "bdev_name": "Nvme3n1" 00:08:50.097 } 00:08:50.097 ]' 00:08:50.097 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:50.355 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:50.355 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.356 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.614 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 
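The teardown mirrors the setup: nbd_get_disks reports the current nbd-to-bdev mapping (the JSON block above), then each device is detached and polled out of /proc/partitions before the next step. A compact sketch of that loop, condensing the waitfornbd_exit helper being traced here:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # current mapping, as dumped above

  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
      $rpc nbd_stop_disk "$dev"
      for i in $(seq 1 20); do                     # wait for the partition entry to vanish
          grep -q -w "$(basename "$dev")" /proc/partitions || break
          sleep 0.1
      done
  done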
00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.872 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.873 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.131 18:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.131 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.397 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:51.665 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:51.666 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:51.666 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:51.666 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.666 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 
00:08:51.924 /dev/nbd0 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.924 1+0 records in 00:08:51.924 1+0 records out 00:08:51.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531837 s, 7.7 MB/s 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:51.924 18:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:52.183 /dev/nbd1 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.183 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.183 1+0 records in 00:08:52.183 1+0 records out 00:08:52.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602063 s, 6.8 MB/s 00:08:52.183 18:26:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.184 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:52.443 /dev/nbd10 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.443 1+0 records in 00:08:52.443 1+0 records out 00:08:52.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543559 s, 7.5 MB/s 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:52.443 /dev/nbd11 00:08:52.443 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.702 18:26:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.702 1+0 records in 00:08:52.702 1+0 records out 00:08:52.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004354 s, 9.4 MB/s 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:52.702 /dev/nbd12 00:08:52.702 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.962 1+0 records in 00:08:52.962 1+0 records out 00:08:52.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713972 s, 5.7 MB/s 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 
4096 '!=' 0 ']' 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:52.962 /dev/nbd13 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.962 1+0 records in 00:08:52.962 1+0 records out 00:08:52.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649542 s, 6.3 MB/s 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.962 18:26:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.962 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.222 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.222 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd0", 00:08:53.222 "bdev_name": "Nvme0n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd1", 00:08:53.222 "bdev_name": "Nvme1n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd10", 00:08:53.222 "bdev_name": "Nvme2n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd11", 00:08:53.222 "bdev_name": "Nvme2n2" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd12", 00:08:53.222 "bdev_name": "Nvme2n3" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 
"nbd_device": "/dev/nbd13", 00:08:53.222 "bdev_name": "Nvme3n1" 00:08:53.222 } 00:08:53.222 ]' 00:08:53.222 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd0", 00:08:53.222 "bdev_name": "Nvme0n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd1", 00:08:53.222 "bdev_name": "Nvme1n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd10", 00:08:53.222 "bdev_name": "Nvme2n1" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd11", 00:08:53.222 "bdev_name": "Nvme2n2" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd12", 00:08:53.222 "bdev_name": "Nvme2n3" 00:08:53.222 }, 00:08:53.222 { 00:08:53.222 "nbd_device": "/dev/nbd13", 00:08:53.222 "bdev_name": "Nvme3n1" 00:08:53.222 } 00:08:53.222 ]' 00:08:53.222 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.484 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:53.484 /dev/nbd1 00:08:53.484 /dev/nbd10 00:08:53.484 /dev/nbd11 00:08:53.484 /dev/nbd12 00:08:53.484 /dev/nbd13' 00:08:53.484 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:53.485 /dev/nbd1 00:08:53.485 /dev/nbd10 00:08:53.485 /dev/nbd11 00:08:53.485 /dev/nbd12 00:08:53.485 /dev/nbd13' 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:53.485 256+0 records in 00:08:53.485 256+0 records out 00:08:53.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126548 s, 82.9 MB/s 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:53.485 256+0 records in 00:08:53.485 256+0 records out 00:08:53.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0919574 s, 11.4 MB/s 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:08:53.485 256+0 records in 00:08:53.485 256+0 records out 00:08:53.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103794 s, 10.1 MB/s 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.485 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:53.747 256+0 records in 00:08:53.747 256+0 records out 00:08:53.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0962291 s, 10.9 MB/s 00:08:53.747 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.747 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:53.747 256+0 records in 00:08:53.747 256+0 records out 00:08:53.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0965366 s, 10.9 MB/s 00:08:53.747 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.747 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:54.005 256+0 records in 00:08:54.005 256+0 records out 00:08:54.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0902617 s, 11.6 MB/s 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:54.005 256+0 records in 00:08:54.005 256+0 records out 00:08:54.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0986379 s, 10.6 MB/s 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.005 18:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.264 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.523 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.782 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.041 18:26:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.300 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:55.559 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:55.818 malloc_lvol_verify 00:08:55.818 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:56.076 3c318319-44d6-4afd-aefa-d33af47b3394 00:08:56.076 18:26:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:56.076 0f0f7d44-c6ef-4620-99df-08580b995b13 00:08:56.076 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:56.335 /dev/nbd0 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:56.335 mke2fs 1.46.5 (30-Dec-2021) 00:08:56.335 Discarding device blocks: 0/4096 done 00:08:56.335 Creating filesystem with 4096 1k blocks and 1024 
inodes 00:08:56.335 00:08:56.335 Allocating group tables: 0/1 done 00:08:56.335 Writing inode tables: 0/1 done 00:08:56.335 Creating journal (1024 blocks): done 00:08:56.335 Writing superblocks and filesystem accounting information: 0/1 done 00:08:56.335 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.335 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 77780 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77780 ']' 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77780 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77780 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77780' 00:08:56.595 killing process with pid 77780 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77780 00:08:56.595 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77780 00:08:57.165 ************************************ 00:08:57.165 END TEST bdev_nbd 00:08:57.165 ************************************ 00:08:57.165 18:26:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:57.165 00:08:57.165 real 0m9.397s 00:08:57.165 user 0m13.072s 00:08:57.165 sys 0m3.580s 00:08:57.165 18:26:56 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:57.165 18:26:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:57.165 skipping fio tests on NVMe due to multi-ns failures. 00:08:57.165 18:26:56 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:57.165 18:26:56 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:57.165 18:26:56 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:57.165 18:26:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:57.165 18:26:56 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:57.165 18:26:56 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:57.165 18:26:56 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:57.165 18:26:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.165 ************************************ 00:08:57.165 START TEST bdev_verify 00:08:57.165 ************************************ 00:08:57.165 18:26:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:57.165 [2024-07-23 18:26:57.051150] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:08:57.165 [2024-07-23 18:26:57.051408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78148 ] 00:08:57.165 [2024-07-23 18:26:57.200154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.424 [2024-07-23 18:26:57.281943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.424 [2024-07-23 18:26:57.282046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.992 Running I/O for 5 seconds... 
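For reference, the bdev_verify run started above (its results follow below) reduces to a single bdevperf invocation; this is a re-run sketch, not part of the harness, with paths and flags copied from the xtrace lines and bdev.json being the config file the earlier setup stage wrote out.
# Re-run sketch only: flags copied from the trace -- queue depth 128,
# 4096-byte I/Os, verify workload for 5 s on core mask 0x3.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3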
00:09:03.278 00:09:03.278 Latency(us) 00:09:03.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.278 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.278 Verification LBA range: start 0x0 length 0xbd0bd 00:09:03.278 Nvme0n1 : 5.10 1481.72 5.79 0.00 0.00 86196.17 19803.89 81505.03 00:09:03.278 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.278 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:03.278 Nvme0n1 : 5.10 1405.64 5.49 0.00 0.00 90842.57 19002.58 79215.57 00:09:03.278 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.278 Verification LBA range: start 0x0 length 0xa0000 00:09:03.278 Nvme1n1 : 5.10 1481.12 5.79 0.00 0.00 86114.96 19231.52 78299.78 00:09:03.278 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0xa0000 length 0xa0000 00:09:03.279 Nvme1n1 : 5.10 1405.08 5.49 0.00 0.00 90620.84 24840.72 76468.21 00:09:03.279 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x0 length 0x80000 00:09:03.279 Nvme2n1 : 5.10 1480.60 5.78 0.00 0.00 86051.28 18430.21 80589.25 00:09:03.279 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x80000 length 0x80000 00:09:03.279 Nvme2n1 : 5.10 1404.50 5.49 0.00 0.00 90501.77 23695.99 76010.31 00:09:03.279 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x0 length 0x80000 00:09:03.279 Nvme2n2 : 5.10 1480.00 5.78 0.00 0.00 85943.62 17857.84 82420.82 00:09:03.279 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x80000 length 0x80000 00:09:03.279 Nvme2n2 : 5.11 1403.87 5.48 0.00 0.00 90377.60 17285.48 78757.67 00:09:03.279 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x0 length 0x80000 00:09:03.279 Nvme2n3 : 5.10 1479.36 5.78 0.00 0.00 85845.55 18659.16 82878.71 00:09:03.279 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x80000 length 0x80000 00:09:03.279 Nvme2n3 : 5.11 1403.15 5.48 0.00 0.00 90281.92 14309.17 79673.46 00:09:03.279 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x0 length 0x20000 00:09:03.279 Nvme3n1 : 5.11 1478.75 5.78 0.00 0.00 85748.55 14652.59 83336.61 00:09:03.279 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.279 Verification LBA range: start 0x20000 length 0x20000 00:09:03.279 Nvme3n1 : 5.11 1402.75 5.48 0.00 0.00 90206.77 13565.09 79215.57 00:09:03.279 =================================================================================================================== 00:09:03.279 Total : 17306.53 67.60 0.00 0.00 88169.09 13565.09 83336.61 00:09:03.843 00:09:03.843 real 0m6.682s 00:09:03.843 user 0m12.365s 00:09:03.843 sys 0m0.347s 00:09:03.843 18:27:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:03.843 18:27:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:03.843 ************************************ 00:09:03.843 END TEST bdev_verify 00:09:03.843 ************************************ 00:09:03.843 18:27:03 blockdev_nvme -- bdev/blockdev.sh@777 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:03.843 18:27:03 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:03.843 18:27:03 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:03.843 18:27:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.843 ************************************ 00:09:03.843 START TEST bdev_verify_big_io 00:09:03.843 ************************************ 00:09:03.843 18:27:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:03.843 [2024-07-23 18:27:03.808856] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:03.843 [2024-07-23 18:27:03.809103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78235 ] 00:09:04.101 [2024-07-23 18:27:03.960634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.101 [2024-07-23 18:27:04.041904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.101 [2024-07-23 18:27:04.042004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.669 Running I/O for 5 seconds... 00:09:11.240 00:09:11.240 Latency(us) 00:09:11.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.240 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.240 Verification LBA range: start 0x0 length 0xbd0b 00:09:11.240 Nvme0n1 : 5.43 188.53 11.78 0.00 0.00 663331.77 21406.52 655703.42 00:09:11.240 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.240 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:11.240 Nvme0n1 : 5.62 93.12 5.82 0.00 0.00 1299985.33 16713.11 1391996.09 00:09:11.240 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.240 Verification LBA range: start 0x0 length 0xa000 00:09:11.240 Nvme1n1 : 5.43 188.41 11.78 0.00 0.00 649713.02 66852.44 666692.86 00:09:11.240 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.240 Verification LBA range: start 0xa000 length 0xa000 00:09:11.241 Nvme1n1 : 5.66 98.13 6.13 0.00 0.00 1197030.01 36631.48 1560500.88 00:09:11.241 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x0 length 0x8000 00:09:11.241 Nvme2n1 : 5.56 186.72 11.67 0.00 0.00 634656.08 55176.16 663029.72 00:09:11.241 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x8000 length 0x8000 00:09:11.241 Nvme2n1 : 5.78 110.50 6.91 0.00 0.00 1014853.29 38005.16 1142902.05 00:09:11.241 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x0 length 0x8000 00:09:11.241 Nvme2n2 : 5.60 201.75 12.61 0.00 0.00 589867.43 8471.03 666692.86 00:09:11.241 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x8000 length 0x8000 00:09:11.241 Nvme2n2 : 5.85 117.70 7.36 0.00 0.00 921966.55 
24039.41 2388372.23 00:09:11.241 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x0 length 0x8000 00:09:11.241 Nvme2n3 : 5.60 202.58 12.66 0.00 0.00 575639.26 8928.92 666692.86 00:09:11.241 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x8000 length 0x8000 00:09:11.241 Nvme2n3 : 6.02 161.25 10.08 0.00 0.00 646567.54 12477.60 2432330.01 00:09:11.241 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x0 length 0x2000 00:09:11.241 Nvme3n1 : 5.61 205.52 12.84 0.00 0.00 556231.72 5351.63 666692.86 00:09:11.241 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:11.241 Verification LBA range: start 0x2000 length 0x2000 00:09:11.241 Nvme3n1 : 6.25 274.39 17.15 0.00 0.00 368105.98 468.63 2461635.19 00:09:11.241 =================================================================================================================== 00:09:11.241 Total : 2028.61 126.79 0.00 0.00 677133.78 468.63 2461635.19 00:09:12.177 00:09:12.177 real 0m8.180s 00:09:12.177 user 0m15.280s 00:09:12.177 sys 0m0.383s 00:09:12.177 18:27:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.177 18:27:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:12.177 ************************************ 00:09:12.177 END TEST bdev_verify_big_io 00:09:12.177 ************************************ 00:09:12.177 18:27:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.177 18:27:11 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:12.177 18:27:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.177 18:27:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.177 ************************************ 00:09:12.177 START TEST bdev_write_zeroes 00:09:12.177 ************************************ 00:09:12.177 18:27:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:12.177 [2024-07-23 18:27:12.050801] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:12.177 [2024-07-23 18:27:12.051042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78353 ] 00:09:12.177 [2024-07-23 18:27:12.199782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.436 [2024-07-23 18:27:12.280689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.696 Running I/O for 1 seconds... 
00:09:14.070 00:09:14.070 Latency(us) 00:09:14.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.070 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme0n1 : 1.01 9720.29 37.97 0.00 0.00 13129.92 8871.69 28274.92 00:09:14.070 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme1n1 : 1.02 9708.56 37.92 0.00 0.00 13124.62 9329.58 28847.29 00:09:14.070 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme2n1 : 1.02 9697.99 37.88 0.00 0.00 13082.05 9215.11 28618.34 00:09:14.070 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme2n2 : 1.02 9728.38 38.00 0.00 0.00 12990.74 7841.43 26672.29 00:09:14.070 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme2n3 : 1.02 9757.92 38.12 0.00 0.00 12921.38 4636.17 26672.29 00:09:14.070 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:14.070 Nvme3n1 : 1.02 9749.16 38.08 0.00 0.00 12900.25 4922.35 27359.13 00:09:14.070 =================================================================================================================== 00:09:14.070 Total : 58362.29 227.98 0.00 0.00 13024.30 4636.17 28847.29 00:09:14.330 00:09:14.330 real 0m2.165s 00:09:14.330 user 0m1.766s 00:09:14.330 sys 0m0.285s 00:09:14.330 18:27:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.330 18:27:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 END TEST bdev_write_zeroes 00:09:14.330 ************************************ 00:09:14.330 18:27:14 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.330 18:27:14 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:14.330 18:27:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.330 18:27:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.330 ************************************ 00:09:14.330 START TEST bdev_json_nonenclosed 00:09:14.330 ************************************ 00:09:14.330 18:27:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.330 [2024-07-23 18:27:14.270242] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:14.330 [2024-07-23 18:27:14.270491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78396 ] 00:09:14.590 [2024-07-23 18:27:14.418415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.590 [2024-07-23 18:27:14.499198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.590 [2024-07-23 18:27:14.499315] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:09:14.590 [2024-07-23 18:27:14.499359] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:14.590 [2024-07-23 18:27:14.499371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.849 ************************************ 00:09:14.849 END TEST bdev_json_nonenclosed 00:09:14.849 ************************************ 00:09:14.849 00:09:14.849 real 0m0.475s 00:09:14.849 user 0m0.245s 00:09:14.849 sys 0m0.126s 00:09:14.849 18:27:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.849 18:27:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:14.849 18:27:14 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.849 18:27:14 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:14.849 18:27:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.849 18:27:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.849 ************************************ 00:09:14.849 START TEST bdev_json_nonarray 00:09:14.849 ************************************ 00:09:14.849 18:27:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:14.849 [2024-07-23 18:27:14.803977] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:14.849 [2024-07-23 18:27:14.804251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78416 ] 00:09:15.109 [2024-07-23 18:27:14.942690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.109 [2024-07-23 18:27:15.023246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.109 [2024-07-23 18:27:15.023391] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
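The two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray, whose teardown continues below) feed bdevperf deliberately malformed configs. The actual nonenclosed.json and nonarray.json fixtures are not shown in this log; the stand-ins below are only illustrative guesses at the two shapes that trigger the errors printed above.
# Hypothetical stand-ins -- not the real test fixtures.
# Top-level value is an array, not an object
#   -> "Invalid JSON configuration: not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF
# "subsystems" maps to an object instead of an array
#   -> "Invalid JSON configuration: 'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
# Either config makes the app exit through spdk_app_stop with a non-zero rc,
# matching the warnings in the log above.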
00:09:15.109 [2024-07-23 18:27:15.023429] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:15.109 [2024-07-23 18:27:15.023450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.368 ************************************ 00:09:15.368 END TEST bdev_json_nonarray 00:09:15.368 ************************************ 00:09:15.368 00:09:15.368 real 0m0.465s 00:09:15.368 user 0m0.232s 00:09:15.368 sys 0m0.129s 00:09:15.368 18:27:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.368 18:27:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:15.368 18:27:15 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:15.368 ************************************ 00:09:15.368 END TEST blockdev_nvme 00:09:15.368 ************************************ 00:09:15.368 00:09:15.368 real 0m32.845s 00:09:15.368 user 0m49.769s 00:09:15.368 sys 0m6.516s 00:09:15.368 18:27:15 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.368 18:27:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.368 18:27:15 -- spdk/autotest.sh@213 -- # uname -s 00:09:15.368 18:27:15 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:15.368 18:27:15 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:15.368 18:27:15 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:15.368 18:27:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.368 18:27:15 -- common/autotest_common.sh@10 -- # set +x 00:09:15.368 ************************************ 00:09:15.368 START TEST blockdev_nvme_gpt 00:09:15.368 ************************************ 00:09:15.368 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:15.368 * Looking for test storage... 
00:09:15.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:15.368 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78492 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:15.629 18:27:15 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78492 00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78492 ']' 00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
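Only fragments of the waitforlisten call above are visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the loop below is a rough sketch of that polling pattern under those values, not the harness's actual implementation.
# Sketch, not the real helper: poll the RPC socket until the target answers.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 1; i <= max_retries; i++)); do
        # Give up early if the target process already died.
        kill -0 "$pid" 2>/dev/null || return 1
        # Consider the target ready once a trivial RPC succeeds.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}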
00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:15.629 18:27:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.629 [2024-07-23 18:27:15.525411] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:15.629 [2024-07-23 18:27:15.525639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78492 ] 00:09:15.629 [2024-07-23 18:27:15.676152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.888 [2024-07-23 18:27:15.757515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.458 18:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:16.458 18:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:09:16.458 18:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:16.458 18:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:09:16.458 18:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:17.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.028 Waiting for block devices as requested 00:09:17.028 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.287 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.287 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:17.287 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.560 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:22.560 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:09:22.560 
18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:09:22.560 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:22.561 BYT; 00:09:22.561 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:22.561 BYT; 00:09:22.561 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ 
\l\a\b\e\l* ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:22.561 18:27:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:22.561 18:27:22 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:23.933 The operation has completed successfully. 
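The step above probes each NVMe namespace with parted, picks the first disk that reports an unrecognised label, writes a fresh GPT with two test partitions, and stamps them with the SPDK partition type GUIDs read out of module/bdev/gpt/gpt.h. Condensed into a stand-alone sketch (device path, partition names and GUIDs are copied from the trace here and in the step that follows; only safe on a disposable test disk):

dev=/dev/nvme0n1   # the disk the label probe above settled on
# fresh GPT label with two halves for the gpt bdev tests
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# partition 1 gets the current SPDK partition type GUID, partition 2 the legacy one,
# each with a fixed unique partition GUID so the test can find them again later
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"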
00:09:23.933 18:27:23 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:24.867 The operation has completed successfully. 00:09:24.867 18:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.001 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.001 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.001 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.001 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:26.260 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.260 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.260 [] 00:09:26.260 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:26.260 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:26.260 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.260 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.520 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.520 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:26.520 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.520 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.520 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.521 
18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:26.521 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.521 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.521 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.521 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:26.521 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:26.521 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.521 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:26.521 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:26.786 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.786 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:26.786 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:26.787 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ff25b97a-9f67-460d-9a0c-174d75d1a2f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ff25b97a-9f67-460d-9a0c-174d75d1a2f9",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' 
"abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "30c70783-a98b-4e74-a5bc-5bc0e84d47cb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "30c70783-a98b-4e74-a5bc-5bc0e84d47cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f2c046e4-ec6e-492e-b5e0-be1c454d7c14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2c046e4-ec6e-492e-b5e0-be1c454d7c14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' 
"e954300e-6698-481f-9e43-1456544ad1b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e954300e-6698-481f-9e43-1456544ad1b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dde5fd08-e50c-4bee-b12c-5267a1fc5dce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dde5fd08-e50c-4bee-b12c-5267a1fc5dce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:26.787 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:26.787 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:26.787 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:26.787 18:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 78492 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78492 ']' 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78492 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78492 00:09:26.787 killing process with pid 78492 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78492' 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78492 00:09:26.787 18:27:26 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78492 00:09:27.361 18:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:27.361 18:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:27.361 18:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:27.361 18:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:27.361 18:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.361 ************************************ 00:09:27.361 START TEST bdev_hello_world 00:09:27.361 ************************************ 00:09:27.361 18:27:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:27.620 [2024-07-23 18:27:27.419583] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:27.620 [2024-07-23 18:27:27.419848] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79107 ] 00:09:27.620 [2024-07-23 18:27:27.574493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.620 [2024-07-23 18:27:27.653860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.188 [2024-07-23 18:27:28.090343] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:28.189 [2024-07-23 18:27:28.090412] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:28.189 [2024-07-23 18:27:28.090447] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:28.189 [2024-07-23 18:27:28.092936] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:28.189 [2024-07-23 18:27:28.093311] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:28.189 [2024-07-23 18:27:28.093337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:28.189 [2024-07-23 18:27:28.093566] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
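The hello_world test above amounts to pointing the hello_bdev example at a JSON configuration carrying the four PCIe attach calls that gen_nvme.sh produced earlier in the log, and naming the bdev to exercise. A minimal hand-run sketch, assuming the usual "subsystems" wrapper layout for the config file (the attach parameters and the --json/-b flags are copied from the trace):

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
        { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
      ]
    }
  ]
}
EOF
# writes "Hello World!" to the named bdev and reads it back, as in the NOTICE lines above
./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1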
00:09:28.189 00:09:28.189 [2024-07-23 18:27:28.093625] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:28.446 00:09:28.446 real 0m1.133s 00:09:28.446 user 0m0.734s 00:09:28.446 sys 0m0.292s 00:09:28.446 18:27:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.447 18:27:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:28.447 ************************************ 00:09:28.447 END TEST bdev_hello_world 00:09:28.447 ************************************ 00:09:28.705 18:27:28 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:28.705 18:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:28.705 18:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.705 18:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:28.705 ************************************ 00:09:28.705 START TEST bdev_bounds 00:09:28.705 ************************************ 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=79142 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.705 Process bdevio pid: 79142 00:09:28.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 79142' 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 79142 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 79142 ']' 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:28.705 18:27:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:28.705 [2024-07-23 18:27:28.629710] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
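The bounds test starting here launches the bdevio app against the same JSON config and then kicks its CUnit suites off over the RPC socket with tests.py. Roughly, from the repo root (flags copied from the trace; -w keeps bdevio waiting until the suites are triggered via RPC):

./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
# once bdevio reports it is listening on /var/tmp/spdk.sock:
./test/bdev/bdevio/tests.py perform_tests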
00:09:28.705 [2024-07-23 18:27:28.629872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79142 ] 00:09:28.963 [2024-07-23 18:27:28.784152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.963 [2024-07-23 18:27:28.865175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.963 [2024-07-23 18:27:28.865292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.963 [2024-07-23 18:27:28.865420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.530 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:29.530 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:09:29.530 18:27:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:29.530 I/O targets: 00:09:29.530 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:29.530 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:29.530 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:29.530 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:29.530 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:29.530 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:29.530 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:29.530 00:09:29.530 00:09:29.530 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.530 http://cunit.sourceforge.net/ 00:09:29.530 00:09:29.530 00:09:29.530 Suite: bdevio tests on: Nvme3n1 00:09:29.530 Test: blockdev write read block ...passed 00:09:29.530 Test: blockdev write zeroes read block ...passed 00:09:29.530 Test: blockdev write zeroes read no split ...passed 00:09:29.530 Test: blockdev write zeroes read split ...passed 00:09:29.530 Test: blockdev write zeroes read split partial ...passed 00:09:29.530 Test: blockdev reset ...[2024-07-23 18:27:29.528236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:29.530 [2024-07-23 18:27:29.530808] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.530 passed 00:09:29.530 Test: blockdev write read 8 blocks ...passed 00:09:29.530 Test: blockdev write read size > 128k ...passed 00:09:29.530 Test: blockdev write read invalid size ...passed 00:09:29.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.530 Test: blockdev write read max offset ...passed 00:09:29.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.530 Test: blockdev writev readv 8 blocks ...passed 00:09:29.530 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.530 Test: blockdev writev readv block ...passed 00:09:29.530 Test: blockdev writev readv size > 128k ...passed 00:09:29.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.530 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.538098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba00e000 len:0x1000 00:09:29.530 [2024-07-23 18:27:29.538257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.530 passed 00:09:29.530 Test: blockdev nvme passthru rw ...passed 00:09:29.530 Test: blockdev nvme passthru vendor specific ...[2024-07-23 18:27:29.539262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:29.530 [2024-07-23 18:27:29.539391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:29.530 passed 00:09:29.530 Test: blockdev nvme admin passthru ...passed 00:09:29.530 Test: blockdev copy ...passed 00:09:29.530 Suite: bdevio tests on: Nvme2n3 00:09:29.530 Test: blockdev write read block ...passed 00:09:29.530 Test: blockdev write zeroes read block ...passed 00:09:29.530 Test: blockdev write zeroes read no split ...passed 00:09:29.530 Test: blockdev write zeroes read split ...passed 00:09:29.530 Test: blockdev write zeroes read split partial ...passed 00:09:29.530 Test: blockdev reset ...[2024-07-23 18:27:29.568173] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:29.530 [2024-07-23 18:27:29.570870] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.530 passed 00:09:29.530 Test: blockdev write read 8 blocks ...passed 00:09:29.530 Test: blockdev write read size > 128k ...passed 00:09:29.530 Test: blockdev write read invalid size ...passed 00:09:29.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.530 Test: blockdev write read max offset ...passed 00:09:29.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.530 Test: blockdev writev readv 8 blocks ...passed 00:09:29.530 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.530 Test: blockdev writev readv block ...passed 00:09:29.530 Test: blockdev writev readv size > 128k ...passed 00:09:29.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.530 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.577742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba00a000 len:0x1000 00:09:29.530 [2024-07-23 18:27:29.577809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.530 passed 00:09:29.530 Test: blockdev nvme passthru rw ...passed 00:09:29.530 Test: blockdev nvme passthru vendor specific ...passed 00:09:29.530 Test: blockdev nvme admin passthru ...[2024-07-23 18:27:29.578639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:29.530 [2024-07-23 18:27:29.578677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:29.789 passed 00:09:29.789 Test: blockdev copy ...passed 00:09:29.789 Suite: bdevio tests on: Nvme2n2 00:09:29.789 Test: blockdev write read block ...passed 00:09:29.789 Test: blockdev write zeroes read block ...passed 00:09:29.789 Test: blockdev write zeroes read no split ...passed 00:09:29.789 Test: blockdev write zeroes read split ...passed 00:09:29.789 Test: blockdev write zeroes read split partial ...passed 00:09:29.789 Test: blockdev reset ...[2024-07-23 18:27:29.609973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:29.789 [2024-07-23 18:27:29.612407] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.789 passed 00:09:29.789 Test: blockdev write read 8 blocks ...passed 00:09:29.789 Test: blockdev write read size > 128k ...passed 00:09:29.789 Test: blockdev write read invalid size ...passed 00:09:29.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.789 Test: blockdev write read max offset ...passed 00:09:29.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.790 Test: blockdev writev readv 8 blocks ...passed 00:09:29.790 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.790 Test: blockdev writev readv block ...passed 00:09:29.790 Test: blockdev writev readv size > 128k ...passed 00:09:29.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.790 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.620885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa606000 len:0x1000 00:09:29.790 [2024-07-23 18:27:29.621022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev nvme passthru rw ...passed 00:09:29.790 Test: blockdev nvme passthru vendor specific ...[2024-07-23 18:27:29.622197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:29.790 [2024-07-23 18:27:29.622291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev nvme admin passthru ...passed 00:09:29.790 Test: blockdev copy ...passed 00:09:29.790 Suite: bdevio tests on: Nvme2n1 00:09:29.790 Test: blockdev write read block ...passed 00:09:29.790 Test: blockdev write zeroes read block ...passed 00:09:29.790 Test: blockdev write zeroes read no split ...passed 00:09:29.790 Test: blockdev write zeroes read split ...passed 00:09:29.790 Test: blockdev write zeroes read split partial ...passed 00:09:29.790 Test: blockdev reset ...[2024-07-23 18:27:29.652037] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:29.790 passed 00:09:29.790 Test: blockdev write read 8 blocks ...[2024-07-23 18:27:29.654527] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.790 passed 00:09:29.790 Test: blockdev write read size > 128k ...passed 00:09:29.790 Test: blockdev write read invalid size ...passed 00:09:29.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.790 Test: blockdev write read max offset ...passed 00:09:29.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.790 Test: blockdev writev readv 8 blocks ...passed 00:09:29.790 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.790 Test: blockdev writev readv block ...passed 00:09:29.790 Test: blockdev writev readv size > 128k ...passed 00:09:29.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.790 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.661189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa602000 len:0x1000 00:09:29.790 [2024-07-23 18:27:29.661240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev nvme passthru rw ...passed 00:09:29.790 Test: blockdev nvme passthru vendor specific ...passed 00:09:29.790 Test: blockdev nvme admin passthru ...[2024-07-23 18:27:29.662087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:29.790 [2024-07-23 18:27:29.662121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev copy ...passed 00:09:29.790 Suite: bdevio tests on: Nvme1n1p2 00:09:29.790 Test: blockdev write read block ...passed 00:09:29.790 Test: blockdev write zeroes read block ...passed 00:09:29.790 Test: blockdev write zeroes read no split ...passed 00:09:29.790 Test: blockdev write zeroes read split ...passed 00:09:29.790 Test: blockdev write zeroes read split partial ...passed 00:09:29.790 Test: blockdev reset ...[2024-07-23 18:27:29.682873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:29.790 [2024-07-23 18:27:29.685049] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.790 passed 00:09:29.790 Test: blockdev write read 8 blocks ...passed 00:09:29.790 Test: blockdev write read size > 128k ...passed 00:09:29.790 Test: blockdev write read invalid size ...passed 00:09:29.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.790 Test: blockdev write read max offset ...passed 00:09:29.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.790 Test: blockdev writev readv 8 blocks ...passed 00:09:29.790 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.790 Test: blockdev writev readv block ...passed 00:09:29.790 Test: blockdev writev readv size > 128k ...passed 00:09:29.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.790 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.693450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ba002000 len:0x1000 00:09:29.790 [2024-07-23 18:27:29.693558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev nvme passthru rw ...passed 00:09:29.790 Test: blockdev nvme passthru vendor specific ...passed 00:09:29.790 Test: blockdev nvme admin passthru ...passed 00:09:29.790 Test: blockdev copy ...passed 00:09:29.790 Suite: bdevio tests on: Nvme1n1p1 00:09:29.790 Test: blockdev write read block ...passed 00:09:29.790 Test: blockdev write zeroes read block ...passed 00:09:29.790 Test: blockdev write zeroes read no split ...passed 00:09:29.790 Test: blockdev write zeroes read split ...passed 00:09:29.790 Test: blockdev write zeroes read split partial ...passed 00:09:29.790 Test: blockdev reset ...[2024-07-23 18:27:29.715014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:29.790 passed 00:09:29.790 Test: blockdev write read 8 blocks ...[2024-07-23 18:27:29.717106] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:29.790 passed 00:09:29.790 Test: blockdev write read size > 128k ...passed 00:09:29.790 Test: blockdev write read invalid size ...passed 00:09:29.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.790 Test: blockdev write read max offset ...passed 00:09:29.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.790 Test: blockdev writev readv 8 blocks ...passed 00:09:29.790 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.790 Test: blockdev writev readv block ...passed 00:09:29.790 Test: blockdev writev readv size > 128k ...passed 00:09:29.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.790 Test: blockdev comparev and writev ...[2024-07-23 18:27:29.724397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b723b000 len:0x1000 00:09:29.790 [2024-07-23 18:27:29.724450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev nvme passthru rw ...passed 00:09:29.790 Test: blockdev nvme passthru vendor specific ...passed 00:09:29.790 Test: blockdev nvme admin passthru ...passed 00:09:29.790 Test: blockdev copy ...passed 00:09:29.790 Suite: bdevio tests on: Nvme0n1 00:09:29.790 Test: blockdev write read block ...passed 00:09:29.790 Test: blockdev write zeroes read block ...passed 00:09:29.790 Test: blockdev write zeroes read no split ...passed 00:09:29.790 Test: blockdev write zeroes read split ...passed 00:09:29.790 Test: blockdev write zeroes read split partial ...passed 00:09:29.790 Test: blockdev reset ...[2024-07-23 18:27:29.745191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:29.790 passed 00:09:29.790 Test: blockdev write read 8 blocks ...[2024-07-23 18:27:29.747334] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:29.790 passed 00:09:29.790 Test: blockdev write read size > 128k ...passed 00:09:29.790 Test: blockdev write read invalid size ...passed 00:09:29.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:29.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:29.790 Test: blockdev write read max offset ...passed 00:09:29.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:29.790 Test: blockdev writev readv 8 blocks ...passed 00:09:29.790 Test: blockdev writev readv 30 x 1block ...passed 00:09:29.790 Test: blockdev writev readv block ...passed 00:09:29.790 Test: blockdev writev readv size > 128k ...passed 00:09:29.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:29.790 Test: blockdev comparev and writev ...passed 00:09:29.790 Test: blockdev nvme passthru rw ...[2024-07-23 18:27:29.753048] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:29.790 separate metadata which is not supported yet. 
00:09:29.790 passed 00:09:29.790 Test: blockdev nvme passthru vendor specific ...passed 00:09:29.790 Test: blockdev nvme admin passthru ...[2024-07-23 18:27:29.753691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:29.790 [2024-07-23 18:27:29.753742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:29.790 passed 00:09:29.790 Test: blockdev copy ...passed 00:09:29.790 00:09:29.790 Run Summary: Type Total Ran Passed Failed Inactive 00:09:29.790 suites 7 7 n/a 0 0 00:09:29.790 tests 161 161 161 0 0 00:09:29.790 asserts 1025 1025 1025 0 n/a 00:09:29.790 00:09:29.790 Elapsed time = 0.574 seconds 00:09:29.790 0 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 79142 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 79142 ']' 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 79142 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79142 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:29.790 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79142' 00:09:29.790 killing process with pid 79142 00:09:29.791 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 79142 00:09:29.791 18:27:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 79142 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:30.359 00:09:30.359 real 0m1.606s 00:09:30.359 user 0m3.644s 00:09:30.359 sys 0m0.456s 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:30.359 ************************************ 00:09:30.359 END TEST bdev_bounds 00:09:30.359 ************************************ 00:09:30.359 18:27:30 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:30.359 18:27:30 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:30.359 18:27:30 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:30.359 18:27:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:30.359 ************************************ 00:09:30.359 START TEST bdev_nbd 00:09:30.359 ************************************ 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=79190 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 79190 /var/tmp/spdk-nbd.sock 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 79190 ']' 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:30.359 18:27:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:30.359 [2024-07-23 18:27:30.295995] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:09:30.359 [2024-07-23 18:27:30.296226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.622 [2024-07-23 18:27:30.435778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.622 [2024-07-23 18:27:30.515086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.189 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- 
# dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.448 1+0 records in 00:09:31.448 1+0 records out 00:09:31.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498831 s, 8.2 MB/s 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.448 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.707 1+0 records in 00:09:31.707 1+0 records out 00:09:31.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369597 s, 11.1 MB/s 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.707 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:09:31.966 18:27:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.966 1+0 records in 00:09:31.966 1+0 records out 00:09:31.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715746 s, 5.7 MB/s 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:31.966 18:27:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.226 1+0 records in 00:09:32.226 1+0 records out 00:09:32.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039371 s, 10.4 MB/s 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:32.226 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.484 1+0 records in 00:09:32.484 1+0 records out 00:09:32.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542313 s, 7.6 MB/s 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:32.484 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.742 1+0 records in 00:09:32.742 1+0 records out 00:09:32.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636611 s, 6.4 MB/s 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:32.742 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.000 1+0 records in 00:09:33.000 1+0 records out 00:09:33.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068189 s, 6.0 MB/s 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:33.000 18:27:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.258 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd0", 00:09:33.258 "bdev_name": "Nvme0n1" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd1", 00:09:33.258 "bdev_name": "Nvme1n1p1" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd2", 00:09:33.258 "bdev_name": "Nvme1n1p2" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd3", 00:09:33.258 "bdev_name": "Nvme2n1" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd4", 00:09:33.258 "bdev_name": "Nvme2n2" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd5", 00:09:33.258 "bdev_name": "Nvme2n3" 00:09:33.258 }, 00:09:33.258 { 00:09:33.258 "nbd_device": "/dev/nbd6", 00:09:33.258 "bdev_name": "Nvme3n1" 00:09:33.258 } 00:09:33.258 ]' 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd0", 00:09:33.259 "bdev_name": "Nvme0n1" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd1", 00:09:33.259 "bdev_name": "Nvme1n1p1" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd2", 00:09:33.259 "bdev_name": "Nvme1n1p2" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd3", 00:09:33.259 "bdev_name": "Nvme2n1" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd4", 00:09:33.259 "bdev_name": "Nvme2n2" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd5", 00:09:33.259 "bdev_name": "Nvme2n3" 00:09:33.259 }, 00:09:33.259 { 00:09:33.259 "nbd_device": "/dev/nbd6", 00:09:33.259 "bdev_name": "Nvme3n1" 00:09:33.259 } 00:09:33.259 ]' 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.259 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.523 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.524 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.781 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.039 18:27:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.298 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.558 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:34.835 18:27:34 
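(The stop side mirrors the start side: each /dev/nbdN is detached through the same RPC socket, and waitfornbd_exit polls /proc/partitions until the name disappears, confirming the kernel has released the device. A rough stand-alone equivalent follows; the device list and socket path come from the trace, the 0.1 s retry delay is an assumption.)

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break   # gone from the kernel: detach finished
        sleep 0.1                                      # assumed delay
    done
done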
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.835 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.101 18:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:35.360 /dev/nbd0 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.360 1+0 records in 00:09:35.360 1+0 records out 00:09:35.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782611 s, 5.2 MB/s 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.360 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:35.619 /dev/nbd1 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.619 1+0 records in 00:09:35.619 1+0 records out 00:09:35.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756777 s, 5.4 MB/s 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.619 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:35.619 /dev/nbd10 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.879 1+0 records in 00:09:35.879 1+0 records out 00:09:35.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730639 s, 5.6 MB/s 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:35.879 /dev/nbd11 00:09:35.879 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.138 1+0 records in 00:09:36.138 1+0 records out 00:09:36.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541089 s, 7.6 MB/s 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.138 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:36.139 18:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:36.139 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.139 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:36.139 18:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:36.139 /dev/nbd12 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:36.139 18:27:36 
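(For the data-verification pass the script pins each bdev to an explicit device node instead of letting the daemon pick one: nbd_start_disk is called with both the bdev name and the target /dev/nbdN, as seen in the trace for Nvme0n1 through Nvme3n1. A condensed sketch of that pairing loop, with the bdev and device lists copied from the trace:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

for i in "${!bdev_list[@]}"; do
    # attach bdev_list[i] at the matching fixed device node
    "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
done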
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:36.139 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.398 1+0 records in 00:09:36.398 1+0 records out 00:09:36.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688458 s, 5.9 MB/s 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:36.398 /dev/nbd13 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:36.398 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.657 1+0 records in 00:09:36.657 1+0 records out 00:09:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693424 s, 5.9 MB/s 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:36.657 /dev/nbd14 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.657 1+0 records in 00:09:36.657 1+0 records out 00:09:36.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701333 s, 5.8 MB/s 00:09:36.657 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd0", 00:09:36.916 "bdev_name": "Nvme0n1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd1", 00:09:36.916 "bdev_name": "Nvme1n1p1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd10", 00:09:36.916 "bdev_name": "Nvme1n1p2" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd11", 00:09:36.916 "bdev_name": "Nvme2n1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd12", 00:09:36.916 "bdev_name": "Nvme2n2" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd13", 00:09:36.916 "bdev_name": "Nvme2n3" 00:09:36.916 }, 00:09:36.916 { 
00:09:36.916 "nbd_device": "/dev/nbd14", 00:09:36.916 "bdev_name": "Nvme3n1" 00:09:36.916 } 00:09:36.916 ]' 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd0", 00:09:36.916 "bdev_name": "Nvme0n1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd1", 00:09:36.916 "bdev_name": "Nvme1n1p1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd10", 00:09:36.916 "bdev_name": "Nvme1n1p2" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd11", 00:09:36.916 "bdev_name": "Nvme2n1" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd12", 00:09:36.916 "bdev_name": "Nvme2n2" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd13", 00:09:36.916 "bdev_name": "Nvme2n3" 00:09:36.916 }, 00:09:36.916 { 00:09:36.916 "nbd_device": "/dev/nbd14", 00:09:36.916 "bdev_name": "Nvme3n1" 00:09:36.916 } 00:09:36.916 ]' 00:09:36.916 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:36.916 /dev/nbd1 00:09:36.916 /dev/nbd10 00:09:36.916 /dev/nbd11 00:09:36.916 /dev/nbd12 00:09:36.916 /dev/nbd13 00:09:36.916 /dev/nbd14' 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:37.175 /dev/nbd1 00:09:37.175 /dev/nbd10 00:09:37.175 /dev/nbd11 00:09:37.175 /dev/nbd12 00:09:37.175 /dev/nbd13 00:09:37.175 /dev/nbd14' 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:37.175 256+0 records in 00:09:37.175 256+0 records out 00:09:37.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754715 s, 139 MB/s 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.175 18:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:37.175 256+0 records in 00:09:37.175 256+0 records out 00:09:37.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108894 s, 9.6 MB/s 00:09:37.175 
18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.175 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:37.175 256+0 records in 00:09:37.175 256+0 records out 00:09:37.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111272 s, 9.4 MB/s 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:37.435 256+0 records in 00:09:37.435 256+0 records out 00:09:37.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107483 s, 9.8 MB/s 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:37.435 256+0 records in 00:09:37.435 256+0 records out 00:09:37.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107624 s, 9.7 MB/s 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.435 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:37.694 256+0 records in 00:09:37.694 256+0 records out 00:09:37.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109534 s, 9.6 MB/s 00:09:37.694 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.694 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:37.694 256+0 records in 00:09:37.694 256+0 records out 00:09:37.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113343 s, 9.3 MB/s 00:09:37.694 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.694 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:37.953 256+0 records in 00:09:37.953 256+0 records out 00:09:37.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109293 s, 9.6 MB/s 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.953 18:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.212 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.213 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.213 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.213 
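(The verify half shown above reads the same 1 MiB back from each device and byte-compares it against the random source file; any mismatch makes cmp exit non-zero and fails the test, and the temp file is removed afterwards. Condensed, with paths and options as in the trace:)

tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
    cmp -b -n 1M "$tmp" "$dev"   # -b prints differing bytes, -n 1M limits the compare to 1 MiB
done
rm "$tmp"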
18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.213 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.471 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.472 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:38.730 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.731 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.731 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.731 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:38.989 18:27:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.989 18:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.249 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.508 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.767 
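(After the stop loop the script double-checks that nothing is still exported: it asks the daemon for the current disk map, extracts the device names with jq, and counts lines matching /dev/nbd, expecting zero. A rough equivalent is below; the trailing `|| true` mirrors the `true` in the trace and guards against grep's non-zero exit status when it finds no matches.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || { echo "still $count nbd device(s) attached" >&2; exit 1; }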
18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:39.767 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:40.026 malloc_lvol_verify 00:09:40.026 18:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:40.285 497e2dc9-e3fd-4038-b296-72ae5c309330 00:09:40.285 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:40.545 efbba0b0-4175-4adc-932a-9e0a44645d92 00:09:40.545 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:40.804 /dev/nbd0 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:40.804 mke2fs 1.46.5 (30-Dec-2021) 00:09:40.804 Discarding device blocks: 0/4096 done 00:09:40.804 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:40.804 00:09:40.804 Allocating group tables: 0/1 done 00:09:40.804 Writing inode tables: 0/1 done 00:09:40.804 Creating journal (1024 blocks): done 00:09:40.804 Writing superblocks and filesystem accounting information: 0/1 done 00:09:40.804 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:40.804 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.804 
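(The last nbd scenario, traced above, builds a small logical-volume stack in the target and formats it through /dev/nbd0: a malloc bdev (16 MiB, 512 B blocks), an lvstore on it, a 4 MiB lvol, exported over nbd and formatted with ext4; a successful mkfs followed by a clean detach is the pass condition. The same steps in isolation, with names and sizes copied from the trace:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore "lvs" on the malloc bdev
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol named "lvol"
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                                 # format the lvol through nbd
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0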
18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.063 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 79190 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 79190 ']' 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 79190 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79190 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:41.064 killing process with pid 79190 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79190' 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 79190 00:09:41.064 18:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 79190 00:09:41.323 18:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:41.323 00:09:41.323 real 0m11.089s 00:09:41.323 user 0m15.491s 00:09:41.323 sys 0m4.349s 00:09:41.323 18:27:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:41.323 18:27:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:41.323 ************************************ 00:09:41.323 END TEST bdev_nbd 00:09:41.323 ************************************ 00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:41.323 skipping fio tests on NVMe due to multi-ns failures. 00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
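For reference, the nbd_with_lvol_verify step traced above reduces to a short RPC sequence against the app listening on /var/tmp/spdk-nbd.sock. The sketch below is a minimal, hedged reconstruction of that flow using only the RPC names and arguments visible in this trace (bdev_malloc_create, bdev_lvol_create_lvstore, bdev_lvol_create, nbd_start_disk, nbd_stop_disk); the wait-loop bound and sleep interval are assumptions, not taken from nbd_common.sh verbatim, and "rpc.py" is assumed to be the SPDK scripts/rpc.py on PATH.

#!/usr/bin/env bash
# Hedged sketch of the lvol-over-NBD verification flow shown in the trace above.
# Assumes an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock and that
# the SPDK repo's scripts/rpc.py is reachable as "rpc.py". Run as root (mkfs on /dev/nbd0).
set -euo pipefail

sock=/var/tmp/spdk-nbd.sock
rpc() { rpc.py -s "$sock" "$@"; }

# 1. Back the logical volume with a malloc bdev (size 16, block size 512, as in the trace).
rpc bdev_malloc_create -b malloc_lvol_verify 16 512

# 2. Build an lvstore on the malloc bdev and a small lvol inside it.
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
rpc bdev_lvol_create lvol 4 -l lvs

# 3. Export the lvol as /dev/nbd0 and put an ext4 filesystem on it.
rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0

# 4. Tear the export down and wait until the kernel drops nbd0 from /proc/partitions
#    (assumed bound of 20 polls at 0.1 s; the harness uses a similar retry loop).
rpc nbd_stop_disk /dev/nbd0
for _ in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions || break
    sleep 0.1
done

The trace above shows the same sequence succeeding: mkfs returns 0, the export is stopped, and the harness then kills the nbd app (pid 79190) before moving on to the bdevperf verify runs.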
00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:41.323 18:27:41 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:41.323 18:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:41.323 18:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.323 18:27:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:41.323 ************************************ 00:09:41.323 START TEST bdev_verify 00:09:41.323 ************************************ 00:09:41.323 18:27:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:41.581 [2024-07-23 18:27:41.466163] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:41.581 [2024-07-23 18:27:41.466326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79601 ] 00:09:41.582 [2024-07-23 18:27:41.617776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:41.840 [2024-07-23 18:27:41.697514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.840 [2024-07-23 18:27:41.697638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.409 Running I/O for 5 seconds... 00:09:47.702 00:09:47.702 Latency(us) 00:09:47.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.702 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0xbd0bd 00:09:47.702 Nvme0n1 : 5.06 1289.07 5.04 0.00 0.00 99095.22 23123.62 84252.39 00:09:47.702 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:47.702 Nvme0n1 : 5.08 1108.54 4.33 0.00 0.00 115233.02 14996.01 92494.48 00:09:47.702 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x4ff80 00:09:47.702 Nvme1n1p1 : 5.07 1288.56 5.03 0.00 0.00 98991.84 22780.20 84710.29 00:09:47.702 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:47.702 Nvme1n1p1 : 5.08 1108.21 4.33 0.00 0.00 114951.17 14480.88 86083.97 00:09:47.702 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x4ff7f 00:09:47.702 Nvme1n1p2 : 5.07 1288.07 5.03 0.00 0.00 98886.50 21292.05 83336.61 00:09:47.702 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:47.702 Nvme1n1p2 : 5.08 1107.85 4.33 0.00 0.00 114728.43 14080.22 89289.22 00:09:47.702 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x80000 00:09:47.702 Nvme2n1 : 5.07 1287.63 5.03 0.00 0.00 98750.28 19918.37 81505.03 00:09:47.702 Job: Nvme2n1 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x80000 length 0x80000 00:09:47.702 Nvme2n1 : 5.09 1107.54 4.33 0.00 0.00 114517.83 14137.46 85168.18 00:09:47.702 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x80000 00:09:47.702 Nvme2n2 : 5.07 1287.21 5.03 0.00 0.00 98630.19 19231.52 81047.14 00:09:47.702 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x80000 length 0x80000 00:09:47.702 Nvme2n2 : 5.09 1107.22 4.33 0.00 0.00 114279.77 13965.75 85626.08 00:09:47.702 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x80000 00:09:47.702 Nvme2n3 : 5.07 1286.77 5.03 0.00 0.00 98503.58 18315.74 80589.25 00:09:47.702 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x80000 length 0x80000 00:09:47.702 Nvme2n3 : 5.09 1106.91 4.32 0.00 0.00 114163.25 13908.51 84710.29 00:09:47.702 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x0 length 0x20000 00:09:47.702 Nvme3n1 : 5.07 1286.33 5.02 0.00 0.00 98409.09 14652.59 81962.93 00:09:47.702 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:47.702 Verification LBA range: start 0x20000 length 0x20000 00:09:47.702 Nvme3n1 : 5.09 1106.61 4.32 0.00 0.00 114054.07 13794.04 88373.44 00:09:47.702 =================================================================================================================== 00:09:47.702 Total : 16766.52 65.49 0.00 0.00 106074.31 13794.04 92494.48 00:09:47.962 00:09:47.962 real 0m6.607s 00:09:47.962 user 0m12.189s 00:09:47.962 sys 0m0.353s 00:09:47.962 18:27:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:47.962 18:27:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:47.962 ************************************ 00:09:47.962 END TEST bdev_verify 00:09:47.962 ************************************ 00:09:48.221 18:27:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:48.221 18:27:48 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:48.221 18:27:48 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:48.221 18:27:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.221 ************************************ 00:09:48.221 START TEST bdev_verify_big_io 00:09:48.221 ************************************ 00:09:48.221 18:27:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:48.222 [2024-07-23 18:27:48.136855] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:09:48.222 [2024-07-23 18:27:48.137008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79694 ] 00:09:48.481 [2024-07-23 18:27:48.288028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.481 [2024-07-23 18:27:48.366910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.481 [2024-07-23 18:27:48.367009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.050 Running I/O for 5 seconds... 00:09:55.619 00:09:55.619 Latency(us) 00:09:55.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.619 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.619 Verification LBA range: start 0x0 length 0xbd0b 00:09:55.619 Nvme0n1 : 5.55 158.42 9.90 0.00 0.00 779745.98 22093.36 1201512.41 00:09:55.619 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.619 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:55.619 Nvme0n1 : 5.72 79.75 4.98 0.00 0.00 1525293.79 22665.73 1685047.90 00:09:55.620 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x4ff8 00:09:55.620 Nvme1n1p1 : 5.56 185.21 11.58 0.00 0.00 655351.88 61357.72 633724.53 00:09:55.620 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:55.620 Nvme1n1p1 : 5.72 89.51 5.59 0.00 0.00 1300059.67 64105.08 1362690.91 00:09:55.620 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x4ff7 00:09:55.620 Nvme1n1p2 : 5.56 188.01 11.75 0.00 0.00 639428.92 72347.17 611745.65 00:09:55.620 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:55.620 Nvme1n1p2 : 5.80 99.29 6.21 0.00 0.00 1125775.36 38692.00 1238143.89 00:09:55.620 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x8000 00:09:55.620 Nvme2n1 : 5.58 194.32 12.15 0.00 0.00 613748.72 21520.99 692334.90 00:09:55.620 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x8000 length 0x8000 00:09:55.620 Nvme2n1 : 5.95 111.59 6.97 0.00 0.00 958867.50 29763.07 1780289.73 00:09:55.620 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x8000 00:09:55.620 Nvme2n2 : 5.59 195.27 12.20 0.00 0.00 600776.33 21177.57 703324.34 00:09:55.620 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x8000 length 0x8000 00:09:55.620 Nvme2n2 : 6.17 143.18 8.95 0.00 0.00 720735.10 19918.37 2315109.28 00:09:55.620 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x8000 00:09:55.620 Nvme2n3 : 5.60 202.01 12.63 0.00 0.00 573376.53 6296.03 714313.78 00:09:55.620 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x8000 length 0x8000 00:09:55.620 Nvme2n3 : 6.34 185.62 11.60 0.00 0.00 538768.71 7784.19 2637466.27 
00:09:55.620 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x0 length 0x2000 00:09:55.620 Nvme3n1 : 5.60 205.65 12.85 0.00 0.00 554068.25 4693.41 725303.22 00:09:55.620 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:55.620 Verification LBA range: start 0x2000 length 0x2000 00:09:55.620 Nvme3n1 : 6.46 246.30 15.39 0.00 0.00 394132.34 1116.12 2652118.86 00:09:55.620 =================================================================================================================== 00:09:55.620 Total : 2284.14 142.76 0.00 0.00 691713.43 1116.12 2652118.86 00:09:56.578 00:09:56.578 real 0m8.418s 00:09:56.578 user 0m15.757s 00:09:56.578 sys 0m0.387s 00:09:56.578 18:27:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:56.578 18:27:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:56.578 ************************************ 00:09:56.578 END TEST bdev_verify_big_io 00:09:56.578 ************************************ 00:09:56.578 18:27:56 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.578 18:27:56 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:56.578 18:27:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:56.578 18:27:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.578 ************************************ 00:09:56.578 START TEST bdev_write_zeroes 00:09:56.578 ************************************ 00:09:56.578 18:27:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.578 [2024-07-23 18:27:56.609047] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:56.578 [2024-07-23 18:27:56.609198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79803 ] 00:09:56.836 [2024-07-23 18:27:56.756814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.836 [2024-07-23 18:27:56.835025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.402 Running I/O for 1 seconds... 
00:09:58.334 00:09:58.334 Latency(us) 00:09:58.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.335 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme0n1 : 1.01 8520.17 33.28 0.00 0.00 14966.33 11847.99 28274.92 00:09:58.335 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme1n1p1 : 1.02 8509.20 33.24 0.00 0.00 14958.37 12248.65 28160.45 00:09:58.335 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme1n1p2 : 1.02 8526.94 33.31 0.00 0.00 14884.63 10646.02 25870.98 00:09:58.335 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme2n1 : 1.02 8562.94 33.45 0.00 0.00 14754.40 6324.65 24611.77 00:09:58.335 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme2n2 : 1.03 8553.50 33.41 0.00 0.00 14725.87 6410.51 24840.72 00:09:58.335 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme2n3 : 1.03 8544.25 33.38 0.00 0.00 14701.00 6839.78 25184.14 00:09:58.335 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.335 Nvme3n1 : 1.03 8472.30 33.09 0.00 0.00 14798.23 6897.02 25642.03 00:09:58.335 =================================================================================================================== 00:09:58.335 Total : 59689.31 233.16 0.00 0.00 14826.38 6324.65 28274.92 00:09:58.903 00:09:58.903 real 0m2.200s 00:09:58.903 user 0m1.788s 00:09:58.903 sys 0m0.302s 00:09:58.903 18:27:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:58.903 18:27:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:58.903 ************************************ 00:09:58.903 END TEST bdev_write_zeroes 00:09:58.903 ************************************ 00:09:58.903 18:27:58 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.903 18:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:58.903 18:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:58.903 18:27:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:58.903 ************************************ 00:09:58.903 START TEST bdev_json_nonenclosed 00:09:58.903 ************************************ 00:09:58.903 18:27:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.903 [2024-07-23 18:27:58.888984] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:09:58.903 [2024-07-23 18:27:58.889120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79845 ] 00:09:59.162 [2024-07-23 18:27:59.037481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.162 [2024-07-23 18:27:59.119396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.162 [2024-07-23 18:27:59.119513] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:59.162 [2024-07-23 18:27:59.119539] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:59.162 [2024-07-23 18:27:59.119562] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.421 00:09:59.421 real 0m0.489s 00:09:59.421 user 0m0.247s 00:09:59.421 sys 0m0.137s 00:09:59.421 18:27:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.421 18:27:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:59.421 ************************************ 00:09:59.421 END TEST bdev_json_nonenclosed 00:09:59.421 ************************************ 00:09:59.421 18:27:59 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.421 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:59.421 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.421 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.421 ************************************ 00:09:59.421 START TEST bdev_json_nonarray 00:09:59.421 ************************************ 00:09:59.421 18:27:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.421 [2024-07-23 18:27:59.425111] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:09:59.421 [2024-07-23 18:27:59.425266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79876 ] 00:09:59.679 [2024-07-23 18:27:59.574106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.679 [2024-07-23 18:27:59.657979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.679 [2024-07-23 18:27:59.658153] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:59.679 [2024-07-23 18:27:59.658183] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:59.679 [2024-07-23 18:27:59.658209] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.938 00:09:59.938 real 0m0.475s 00:09:59.938 user 0m0.239s 00:09:59.938 sys 0m0.132s 00:09:59.938 18:27:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:59.938 18:27:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:59.938 ************************************ 00:09:59.938 END TEST bdev_json_nonarray 00:09:59.938 ************************************ 00:09:59.938 18:27:59 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:59.938 18:27:59 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:59.938 18:27:59 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:59.938 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:59.938 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.938 18:27:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.939 ************************************ 00:09:59.939 START TEST bdev_gpt_uuid 00:09:59.939 ************************************ 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79896 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79896 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 79896 ']' 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:59.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:59.939 18:27:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:59.939 [2024-07-23 18:27:59.984415] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:09:59.939 [2024-07-23 18:27:59.984557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79896 ] 00:10:00.198 [2024-07-23 18:28:00.132988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.198 [2024-07-23 18:28:00.184546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.767 18:28:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:00.767 18:28:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:10:00.767 18:28:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:00.767 18:28:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.767 18:28:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:01.336 Some configs were skipped because the RPC state that can call them passed over. 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:10:01.336 { 00:10:01.336 "name": "Nvme1n1p1", 00:10:01.336 "aliases": [ 00:10:01.336 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:01.336 ], 00:10:01.336 "product_name": "GPT Disk", 00:10:01.336 "block_size": 4096, 00:10:01.336 "num_blocks": 655104, 00:10:01.336 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:01.336 "assigned_rate_limits": { 00:10:01.336 "rw_ios_per_sec": 0, 00:10:01.336 "rw_mbytes_per_sec": 0, 00:10:01.336 "r_mbytes_per_sec": 0, 00:10:01.336 "w_mbytes_per_sec": 0 00:10:01.336 }, 00:10:01.336 "claimed": false, 00:10:01.336 "zoned": false, 00:10:01.336 "supported_io_types": { 00:10:01.336 "read": true, 00:10:01.336 "write": true, 00:10:01.336 "unmap": true, 00:10:01.336 "write_zeroes": true, 00:10:01.336 "flush": true, 00:10:01.336 "reset": true, 00:10:01.336 "compare": true, 00:10:01.336 "compare_and_write": false, 00:10:01.336 "abort": true, 00:10:01.336 "nvme_admin": false, 00:10:01.336 "nvme_io": false 00:10:01.336 }, 00:10:01.336 "driver_specific": { 00:10:01.336 "gpt": { 00:10:01.336 "base_bdev": "Nvme1n1", 00:10:01.336 "offset_blocks": 256, 00:10:01.336 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:01.336 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:01.336 "partition_name": "SPDK_TEST_first" 00:10:01.336 } 00:10:01.336 } 
00:10:01.336 } 00:10:01.336 ]' 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.336 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:10:01.336 { 00:10:01.336 "name": "Nvme1n1p2", 00:10:01.336 "aliases": [ 00:10:01.336 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:01.336 ], 00:10:01.336 "product_name": "GPT Disk", 00:10:01.336 "block_size": 4096, 00:10:01.336 "num_blocks": 655103, 00:10:01.336 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:01.336 "assigned_rate_limits": { 00:10:01.336 "rw_ios_per_sec": 0, 00:10:01.336 "rw_mbytes_per_sec": 0, 00:10:01.336 "r_mbytes_per_sec": 0, 00:10:01.336 "w_mbytes_per_sec": 0 00:10:01.336 }, 00:10:01.336 "claimed": false, 00:10:01.336 "zoned": false, 00:10:01.336 "supported_io_types": { 00:10:01.336 "read": true, 00:10:01.336 "write": true, 00:10:01.336 "unmap": true, 00:10:01.336 "write_zeroes": true, 00:10:01.336 "flush": true, 00:10:01.336 "reset": true, 00:10:01.336 "compare": true, 00:10:01.336 "compare_and_write": false, 00:10:01.336 "abort": true, 00:10:01.336 "nvme_admin": false, 00:10:01.336 "nvme_io": false 00:10:01.337 }, 00:10:01.337 "driver_specific": { 00:10:01.337 "gpt": { 00:10:01.337 "base_bdev": "Nvme1n1", 00:10:01.337 "offset_blocks": 655360, 00:10:01.337 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:01.337 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:01.337 "partition_name": "SPDK_TEST_second" 00:10:01.337 } 00:10:01.337 } 00:10:01.337 } 00:10:01.337 ]' 00:10:01.337 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:10:01.337 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:10:01.337 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == 
\a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 79896 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 79896 ']' 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 79896 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79896 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:01.595 killing process with pid 79896 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79896' 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 79896 00:10:01.595 18:28:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 79896 00:10:02.163 00:10:02.163 real 0m2.220s 00:10:02.163 user 0m2.342s 00:10:02.163 sys 0m0.472s 00:10:02.163 18:28:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:02.163 18:28:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:02.163 ************************************ 00:10:02.163 END TEST bdev_gpt_uuid 00:10:02.163 ************************************ 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:02.163 18:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:02.731 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:02.990 Waiting for block devices as requested 00:10:02.990 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.249 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.249 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.535 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:08.535 18:28:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:08.535 18:28:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:08.535 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:08.535 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 
46 49 20 50 41 52 54 00:10:08.535 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:08.535 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:08.535 18:28:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:08.535 00:10:08.535 real 0m53.248s 00:10:08.535 user 1m5.693s 00:10:08.535 sys 0m10.873s 00:10:08.535 18:28:08 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.535 18:28:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:08.535 ************************************ 00:10:08.535 END TEST blockdev_nvme_gpt 00:10:08.535 ************************************ 00:10:08.795 18:28:08 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:08.795 18:28:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:08.795 18:28:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.795 18:28:08 -- common/autotest_common.sh@10 -- # set +x 00:10:08.795 ************************************ 00:10:08.795 START TEST nvme 00:10:08.795 ************************************ 00:10:08.795 18:28:08 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:08.795 * Looking for test storage... 00:10:08.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.795 18:28:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:09.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.300 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.300 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.300 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:10.300 18:28:10 nvme -- nvme/nvme.sh@79 -- # uname 00:10:10.300 18:28:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:10.300 18:28:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:10.300 18:28:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1067 -- # stubpid=80521 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:10.300 Waiting for stub to ready for secondary processes... 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80521 ]] 00:10:10.300 18:28:10 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:10.300 [2024-07-23 18:28:10.313471] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:10:10.300 [2024-07-23 18:28:10.314022] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:11.238 [2024-07-23 18:28:11.250529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.238 18:28:11 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:11.238 18:28:11 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80521 ]] 00:10:11.238 18:28:11 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:11.238 [2024-07-23 18:28:11.283461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.238 [2024-07-23 18:28:11.283554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.238 [2024-07-23 18:28:11.283715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.497 [2024-07-23 18:28:11.293605] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:11.497 [2024-07-23 18:28:11.293645] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:11.497 [2024-07-23 18:28:11.311025] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:11.497 [2024-07-23 18:28:11.311218] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:11.497 [2024-07-23 18:28:11.312015] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:11.497 [2024-07-23 18:28:11.312253] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:11.497 [2024-07-23 18:28:11.312335] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:11.497 [2024-07-23 18:28:11.313036] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:11.497 [2024-07-23 18:28:11.313304] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:11.497 [2024-07-23 18:28:11.313377] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:11.497 [2024-07-23 18:28:11.314075] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:11.497 [2024-07-23 18:28:11.314274] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:11.497 [2024-07-23 18:28:11.314350] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:11.497 [2024-07-23 18:28:11.314418] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:11.497 [2024-07-23 18:28:11.314482] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:12.435 done. 00:10:12.435 18:28:12 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:12.435 18:28:12 nvme -- common/autotest_common.sh@1074 -- # echo done. 
00:10:12.435 18:28:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:12.435 18:28:12 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:12.435 18:28:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.435 18:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.435 ************************************ 00:10:12.435 START TEST nvme_reset 00:10:12.435 ************************************ 00:10:12.435 18:28:12 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:12.694 Initializing NVMe Controllers 00:10:12.694 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:12.694 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:12.694 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:12.694 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:12.694 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:12.694 00:10:12.694 real 0m0.219s 00:10:12.694 user 0m0.065s 00:10:12.694 sys 0m0.107s 00:10:12.694 18:28:12 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.694 18:28:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:12.694 ************************************ 00:10:12.694 END TEST nvme_reset 00:10:12.694 ************************************ 00:10:12.694 18:28:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:12.694 18:28:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:12.694 18:28:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.694 18:28:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.694 ************************************ 00:10:12.694 START TEST nvme_identify 00:10:12.694 ************************************ 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:10:12.694 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:12.694 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:12.694 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:12.694 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:12.694 18:28:12 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:12.694 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:12.956 [2024-07-23 18:28:12.871287] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80554 terminated unexpected 00:10:12.956 ===================================================== 00:10:12.956 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:12.956 
===================================================== 00:10:12.956 Controller Capabilities/Features 00:10:12.956 ================================ 00:10:12.956 Vendor ID: 1b36 00:10:12.956 Subsystem Vendor ID: 1af4 00:10:12.956 Serial Number: 12340 00:10:12.956 Model Number: QEMU NVMe Ctrl 00:10:12.956 Firmware Version: 8.0.0 00:10:12.956 Recommended Arb Burst: 6 00:10:12.956 IEEE OUI Identifier: 00 54 52 00:10:12.956 Multi-path I/O 00:10:12.956 May have multiple subsystem ports: No 00:10:12.956 May have multiple controllers: No 00:10:12.956 Associated with SR-IOV VF: No 00:10:12.956 Max Data Transfer Size: 524288 00:10:12.956 Max Number of Namespaces: 256 00:10:12.956 Max Number of I/O Queues: 64 00:10:12.956 NVMe Specification Version (VS): 1.4 00:10:12.956 NVMe Specification Version (Identify): 1.4 00:10:12.956 Maximum Queue Entries: 2048 00:10:12.956 Contiguous Queues Required: Yes 00:10:12.956 Arbitration Mechanisms Supported 00:10:12.956 Weighted Round Robin: Not Supported 00:10:12.956 Vendor Specific: Not Supported 00:10:12.956 Reset Timeout: 7500 ms 00:10:12.956 Doorbell Stride: 4 bytes 00:10:12.956 NVM Subsystem Reset: Not Supported 00:10:12.956 Command Sets Supported 00:10:12.956 NVM Command Set: Supported 00:10:12.956 Boot Partition: Not Supported 00:10:12.956 Memory Page Size Minimum: 4096 bytes 00:10:12.956 Memory Page Size Maximum: 65536 bytes 00:10:12.956 Persistent Memory Region: Not Supported 00:10:12.956 Optional Asynchronous Events Supported 00:10:12.956 Namespace Attribute Notices: Supported 00:10:12.956 Firmware Activation Notices: Not Supported 00:10:12.956 ANA Change Notices: Not Supported 00:10:12.956 PLE Aggregate Log Change Notices: Not Supported 00:10:12.956 LBA Status Info Alert Notices: Not Supported 00:10:12.956 EGE Aggregate Log Change Notices: Not Supported 00:10:12.956 Normal NVM Subsystem Shutdown event: Not Supported 00:10:12.956 Zone Descriptor Change Notices: Not Supported 00:10:12.956 Discovery Log Change Notices: Not Supported 00:10:12.956 Controller Attributes 00:10:12.956 128-bit Host Identifier: Not Supported 00:10:12.956 Non-Operational Permissive Mode: Not Supported 00:10:12.956 NVM Sets: Not Supported 00:10:12.956 Read Recovery Levels: Not Supported 00:10:12.956 Endurance Groups: Not Supported 00:10:12.956 Predictable Latency Mode: Not Supported 00:10:12.956 Traffic Based Keep ALive: Not Supported 00:10:12.956 Namespace Granularity: Not Supported 00:10:12.956 SQ Associations: Not Supported 00:10:12.956 UUID List: Not Supported 00:10:12.956 Multi-Domain Subsystem: Not Supported 00:10:12.956 Fixed Capacity Management: Not Supported 00:10:12.956 Variable Capacity Management: Not Supported 00:10:12.956 Delete Endurance Group: Not Supported 00:10:12.956 Delete NVM Set: Not Supported 00:10:12.956 Extended LBA Formats Supported: Supported 00:10:12.956 Flexible Data Placement Supported: Not Supported 00:10:12.956 00:10:12.956 Controller Memory Buffer Support 00:10:12.956 ================================ 00:10:12.956 Supported: No 00:10:12.956 00:10:12.956 Persistent Memory Region Support 00:10:12.956 ================================ 00:10:12.956 Supported: No 00:10:12.956 00:10:12.956 Admin Command Set Attributes 00:10:12.956 ============================ 00:10:12.956 Security Send/Receive: Not Supported 00:10:12.956 Format NVM: Supported 00:10:12.956 Firmware Activate/Download: Not Supported 00:10:12.956 Namespace Management: Supported 00:10:12.956 Device Self-Test: Not Supported 00:10:12.956 Directives: Supported 00:10:12.956 NVMe-MI: Not Supported 
00:10:12.956 Virtualization Management: Not Supported 00:10:12.956 Doorbell Buffer Config: Supported 00:10:12.956 Get LBA Status Capability: Not Supported 00:10:12.956 Command & Feature Lockdown Capability: Not Supported 00:10:12.956 Abort Command Limit: 4 00:10:12.956 Async Event Request Limit: 4 00:10:12.956 Number of Firmware Slots: N/A 00:10:12.956 Firmware Slot 1 Read-Only: N/A 00:10:12.956 Firmware Activation Without Reset: N/A 00:10:12.956 Multiple Update Detection Support: N/A 00:10:12.956 Firmware Update Granularity: No Information Provided 00:10:12.956 Per-Namespace SMART Log: Yes 00:10:12.956 Asymmetric Namespace Access Log Page: Not Supported 00:10:12.956 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:12.956 Command Effects Log Page: Supported 00:10:12.956 Get Log Page Extended Data: Supported 00:10:12.956 Telemetry Log Pages: Not Supported 00:10:12.956 Persistent Event Log Pages: Not Supported 00:10:12.956 Supported Log Pages Log Page: May Support 00:10:12.956 Commands Supported & Effects Log Page: Not Supported 00:10:12.956 Feature Identifiers & Effects Log Page:May Support 00:10:12.956 NVMe-MI Commands & Effects Log Page: May Support 00:10:12.956 Data Area 4 for Telemetry Log: Not Supported 00:10:12.956 Error Log Page Entries Supported: 1 00:10:12.956 Keep Alive: Not Supported 00:10:12.956 00:10:12.956 NVM Command Set Attributes 00:10:12.956 ========================== 00:10:12.956 Submission Queue Entry Size 00:10:12.956 Max: 64 00:10:12.956 Min: 64 00:10:12.956 Completion Queue Entry Size 00:10:12.956 Max: 16 00:10:12.956 Min: 16 00:10:12.956 Number of Namespaces: 256 00:10:12.956 Compare Command: Supported 00:10:12.956 Write Uncorrectable Command: Not Supported 00:10:12.956 Dataset Management Command: Supported 00:10:12.956 Write Zeroes Command: Supported 00:10:12.956 Set Features Save Field: Supported 00:10:12.956 Reservations: Not Supported 00:10:12.956 Timestamp: Supported 00:10:12.956 Copy: Supported 00:10:12.956 Volatile Write Cache: Present 00:10:12.956 Atomic Write Unit (Normal): 1 00:10:12.956 Atomic Write Unit (PFail): 1 00:10:12.956 Atomic Compare & Write Unit: 1 00:10:12.956 Fused Compare & Write: Not Supported 00:10:12.956 Scatter-Gather List 00:10:12.956 SGL Command Set: Supported 00:10:12.956 SGL Keyed: Not Supported 00:10:12.956 SGL Bit Bucket Descriptor: Not Supported 00:10:12.956 SGL Metadata Pointer: Not Supported 00:10:12.956 Oversized SGL: Not Supported 00:10:12.956 SGL Metadata Address: Not Supported 00:10:12.956 SGL Offset: Not Supported 00:10:12.956 Transport SGL Data Block: Not Supported 00:10:12.956 Replay Protected Memory Block: Not Supported 00:10:12.957 00:10:12.957 Firmware Slot Information 00:10:12.957 ========================= 00:10:12.957 Active slot: 1 00:10:12.957 Slot 1 Firmware Revision: 1.0 00:10:12.957 00:10:12.957 00:10:12.957 Commands Supported and Effects 00:10:12.957 ============================== 00:10:12.957 Admin Commands 00:10:12.957 -------------- 00:10:12.957 Delete I/O Submission Queue (00h): Supported 00:10:12.957 Create I/O Submission Queue (01h): Supported 00:10:12.957 Get Log Page (02h): Supported 00:10:12.957 Delete I/O Completion Queue (04h): Supported 00:10:12.957 Create I/O Completion Queue (05h): Supported 00:10:12.957 Identify (06h): Supported 00:10:12.957 Abort (08h): Supported 00:10:12.957 Set Features (09h): Supported 00:10:12.957 Get Features (0Ah): Supported 00:10:12.957 Asynchronous Event Request (0Ch): Supported 00:10:12.957 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:12.957 Directive 
Send (19h): Supported 00:10:12.957 Directive Receive (1Ah): Supported 00:10:12.957 Virtualization Management (1Ch): Supported 00:10:12.957 Doorbell Buffer Config (7Ch): Supported 00:10:12.957 Format NVM (80h): Supported LBA-Change 00:10:12.957 I/O Commands 00:10:12.957 ------------ 00:10:12.957 Flush (00h): Supported LBA-Change 00:10:12.957 Write (01h): Supported LBA-Change 00:10:12.957 Read (02h): Supported 00:10:12.957 Compare (05h): Supported 00:10:12.957 Write Zeroes (08h): Supported LBA-Change 00:10:12.957 Dataset Management (09h): Supported LBA-Change 00:10:12.957 Unknown (0Ch): Supported 00:10:12.957 Unknown (12h): Supported 00:10:12.957 Copy (19h): Supported LBA-Change 00:10:12.957 Unknown (1Dh): Supported LBA-Change 00:10:12.957 00:10:12.957 Error Log 00:10:12.957 ========= 00:10:12.957 00:10:12.957 Arbitration 00:10:12.957 =========== 00:10:12.957 Arbitration Burst: no limit 00:10:12.957 00:10:12.957 Power Management 00:10:12.957 ================ 00:10:12.957 Number of Power States: 1 00:10:12.957 Current Power State: Power State #0 00:10:12.957 Power State #0: 00:10:12.957 Max Power: 25.00 W 00:10:12.957 Non-Operational State: Operational 00:10:12.957 Entry Latency: 16 microseconds 00:10:12.957 Exit Latency: 4 microseconds 00:10:12.957 Relative Read Throughput: 0 00:10:12.957 Relative Read Latency: 0 00:10:12.957 Relative Write Throughput: 0 00:10:12.957 Relative Write Latency: 0 00:10:12.957 Idle Power[2024-07-23 18:28:12.872353] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80554 terminated unexpected 00:10:12.957 : Not Reported 00:10:12.957 Active Power: Not Reported 00:10:12.957 Non-Operational Permissive Mode: Not Supported 00:10:12.957 00:10:12.957 Health Information 00:10:12.957 ================== 00:10:12.957 Critical Warnings: 00:10:12.957 Available Spare Space: OK 00:10:12.957 Temperature: OK 00:10:12.957 Device Reliability: OK 00:10:12.957 Read Only: No 00:10:12.957 Volatile Memory Backup: OK 00:10:12.957 Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.957 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:12.957 Available Spare: 0% 00:10:12.957 Available Spare Threshold: 0% 00:10:12.957 Life Percentage Used: 0% 00:10:12.957 Data Units Read: 702 00:10:12.957 Data Units Written: 593 00:10:12.957 Host Read Commands: 31523 00:10:12.957 Host Write Commands: 30561 00:10:12.957 Controller Busy Time: 0 minutes 00:10:12.957 Power Cycles: 0 00:10:12.957 Power On Hours: 0 hours 00:10:12.957 Unsafe Shutdowns: 0 00:10:12.957 Unrecoverable Media Errors: 0 00:10:12.957 Lifetime Error Log Entries: 0 00:10:12.957 Warning Temperature Time: 0 minutes 00:10:12.957 Critical Temperature Time: 0 minutes 00:10:12.957 00:10:12.957 Number of Queues 00:10:12.957 ================ 00:10:12.957 Number of I/O Submission Queues: 64 00:10:12.957 Number of I/O Completion Queues: 64 00:10:12.957 00:10:12.957 ZNS Specific Controller Data 00:10:12.957 ============================ 00:10:12.957 Zone Append Size Limit: 0 00:10:12.957 00:10:12.957 00:10:12.957 Active Namespaces 00:10:12.957 ================= 00:10:12.957 Namespace ID:1 00:10:12.957 Error Recovery Timeout: Unlimited 00:10:12.957 Command Set Identifier: NVM (00h) 00:10:12.957 Deallocate: Supported 00:10:12.957 Deallocated/Unwritten Error: Supported 00:10:12.957 Deallocated Read Value: All 0x00 00:10:12.957 Deallocate in Write Zeroes: Not Supported 00:10:12.957 Deallocated Guard Field: 0xFFFF 00:10:12.957 Flush: Supported 00:10:12.957 Reservation: Not Supported 00:10:12.957 Metadata Transferred as: 
Separate Metadata Buffer 00:10:12.957 Namespace Sharing Capabilities: Private 00:10:12.957 Size (in LBAs): 1548666 (5GiB) 00:10:12.957 Capacity (in LBAs): 1548666 (5GiB) 00:10:12.957 Utilization (in LBAs): 1548666 (5GiB) 00:10:12.957 Thin Provisioning: Not Supported 00:10:12.957 Per-NS Atomic Units: No 00:10:12.957 Maximum Single Source Range Length: 128 00:10:12.957 Maximum Copy Length: 128 00:10:12.957 Maximum Source Range Count: 128 00:10:12.957 NGUID/EUI64 Never Reused: No 00:10:12.957 Namespace Write Protected: No 00:10:12.957 Number of LBA Formats: 8 00:10:12.957 Current LBA Format: LBA Format #07 00:10:12.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.957 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:12.957 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:12.957 00:10:12.957 ===================================================== 00:10:12.957 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:12.957 ===================================================== 00:10:12.957 Controller Capabilities/Features 00:10:12.957 ================================ 00:10:12.957 Vendor ID: 1b36 00:10:12.957 Subsystem Vendor ID: 1af4 00:10:12.957 Serial Number: 12341 00:10:12.957 Model Number: QEMU NVMe Ctrl 00:10:12.957 Firmware Version: 8.0.0 00:10:12.957 Recommended Arb Burst: 6 00:10:12.957 IEEE OUI Identifier: 00 54 52 00:10:12.957 Multi-path I/O 00:10:12.957 May have multiple subsystem ports: No 00:10:12.957 May have multiple controllers: No 00:10:12.957 Associated with SR-IOV VF: No 00:10:12.957 Max Data Transfer Size: 524288 00:10:12.957 Max Number of Namespaces: 256 00:10:12.957 Max Number of I/O Queues: 64 00:10:12.957 NVMe Specification Version (VS): 1.4 00:10:12.957 NVMe Specification Version (Identify): 1.4 00:10:12.957 Maximum Queue Entries: 2048 00:10:12.957 Contiguous Queues Required: Yes 00:10:12.957 Arbitration Mechanisms Supported 00:10:12.957 Weighted Round Robin: Not Supported 00:10:12.957 Vendor Specific: Not Supported 00:10:12.957 Reset Timeout: 7500 ms 00:10:12.957 Doorbell Stride: 4 bytes 00:10:12.957 NVM Subsystem Reset: Not Supported 00:10:12.957 Command Sets Supported 00:10:12.957 NVM Command Set: Supported 00:10:12.957 Boot Partition: Not Supported 00:10:12.957 Memory Page Size Minimum: 4096 bytes 00:10:12.957 Memory Page Size Maximum: 65536 bytes 00:10:12.957 Persistent Memory Region: Not Supported 00:10:12.957 Optional Asynchronous Events Supported 00:10:12.957 Namespace Attribute Notices: Supported 00:10:12.957 Firmware Activation Notices: Not Supported 00:10:12.957 ANA Change Notices: Not Supported 00:10:12.957 PLE Aggregate Log Change Notices: Not Supported 00:10:12.957 LBA Status Info Alert Notices: Not Supported 00:10:12.957 EGE Aggregate Log Change Notices: Not Supported 00:10:12.957 Normal NVM Subsystem Shutdown event: Not Supported 00:10:12.957 Zone Descriptor Change Notices: Not Supported 00:10:12.957 Discovery Log Change Notices: Not Supported 00:10:12.957 Controller Attributes 00:10:12.957 128-bit Host Identifier: Not Supported 00:10:12.957 Non-Operational Permissive Mode: Not Supported 00:10:12.957 NVM Sets: Not Supported 00:10:12.957 Read Recovery Levels: Not Supported 00:10:12.957 Endurance Groups: Not Supported 00:10:12.957 
Predictable Latency Mode: Not Supported 00:10:12.957 Traffic Based Keep ALive: Not Supported 00:10:12.957 Namespace Granularity: Not Supported 00:10:12.957 SQ Associations: Not Supported 00:10:12.957 UUID List: Not Supported 00:10:12.957 Multi-Domain Subsystem: Not Supported 00:10:12.957 Fixed Capacity Management: Not Supported 00:10:12.957 Variable Capacity Management: Not Supported 00:10:12.958 Delete Endurance Group: Not Supported 00:10:12.958 Delete NVM Set: Not Supported 00:10:12.958 Extended LBA Formats Supported: Supported 00:10:12.958 Flexible Data Placement Supported: Not Supported 00:10:12.958 00:10:12.958 Controller Memory Buffer Support 00:10:12.958 ================================ 00:10:12.958 Supported: No 00:10:12.958 00:10:12.958 Persistent Memory Region Support 00:10:12.958 ================================ 00:10:12.958 Supported: No 00:10:12.958 00:10:12.958 Admin Command Set Attributes 00:10:12.958 ============================ 00:10:12.958 Security Send/Receive: Not Supported 00:10:12.958 Format NVM: Supported 00:10:12.958 Firmware Activate/Download: Not Supported 00:10:12.958 Namespace Management: Supported 00:10:12.958 Device Self-Test: Not Supported 00:10:12.958 Directives: Supported 00:10:12.958 NVMe-MI: Not Supported 00:10:12.958 Virtualization Management: Not Supported 00:10:12.958 Doorbell Buffer Config: Supported 00:10:12.958 Get LBA Status Capability: Not Supported 00:10:12.958 Command & Feature Lockdown Capability: Not Supported 00:10:12.958 Abort Command Limit: 4 00:10:12.958 Async Event Request Limit: 4 00:10:12.958 Number of Firmware Slots: N/A 00:10:12.958 Firmware Slot 1 Read-Only: N/A 00:10:12.958 Firmware Activation Without Reset: N/A 00:10:12.958 Multiple Update Detection Support: N/A 00:10:12.958 Firmware Update Granularity: No Information Provided 00:10:12.958 Per-Namespace SMART Log: Yes 00:10:12.958 Asymmetric Namespace Access Log Page: Not Supported 00:10:12.958 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:12.958 Command Effects Log Page: Supported 00:10:12.958 Get Log Page Extended Data: Supported 00:10:12.958 Telemetry Log Pages: Not Supported 00:10:12.958 Persistent Event Log Pages: Not Supported 00:10:12.958 Supported Log Pages Log Page: May Support 00:10:12.958 Commands Supported & Effects Log Page: Not Supported 00:10:12.958 Feature Identifiers & Effects Log Page:May Support 00:10:12.958 NVMe-MI Commands & Effects Log Page: May Support 00:10:12.958 Data Area 4 for Telemetry Log: Not Supported 00:10:12.958 Error Log Page Entries Supported: 1 00:10:12.958 Keep Alive: Not Supported 00:10:12.958 00:10:12.958 NVM Command Set Attributes 00:10:12.958 ========================== 00:10:12.958 Submission Queue Entry Size 00:10:12.958 Max: 64 00:10:12.958 Min: 64 00:10:12.958 Completion Queue Entry Size 00:10:12.958 Max: 16 00:10:12.958 Min: 16 00:10:12.958 Number of Namespaces: 256 00:10:12.958 Compare Command: Supported 00:10:12.958 Write Uncorrectable Command: Not Supported 00:10:12.958 Dataset Management Command: Supported 00:10:12.958 Write Zeroes Command: Supported 00:10:12.958 Set Features Save Field: Supported 00:10:12.958 Reservations: Not Supported 00:10:12.958 Timestamp: Supported 00:10:12.958 Copy: Supported 00:10:12.958 Volatile Write Cache: Present 00:10:12.958 Atomic Write Unit (Normal): 1 00:10:12.958 Atomic Write Unit (PFail): 1 00:10:12.958 Atomic Compare & Write Unit: 1 00:10:12.958 Fused Compare & Write: Not Supported 00:10:12.958 Scatter-Gather List 00:10:12.958 SGL Command Set: Supported 00:10:12.958 SGL Keyed: Not Supported 
00:10:12.958 SGL Bit Bucket Descriptor: Not Supported 00:10:12.958 SGL Metadata Pointer: Not Supported 00:10:12.958 Oversized SGL: Not Supported 00:10:12.958 SGL Metadata Address: Not Supported 00:10:12.958 SGL Offset: Not Supported 00:10:12.958 Transport SGL Data Block: Not Supported 00:10:12.958 Replay Protected Memory Block: Not Supported 00:10:12.958 00:10:12.958 Firmware Slot Information 00:10:12.958 ========================= 00:10:12.958 Active slot: 1 00:10:12.958 Slot 1 Firmware Revision: 1.0 00:10:12.958 00:10:12.958 00:10:12.958 Commands Supported and Effects 00:10:12.958 ============================== 00:10:12.958 Admin Commands 00:10:12.958 -------------- 00:10:12.958 Delete I/O Submission Queue (00h): Supported 00:10:12.958 Create I/O Submission Queue (01h): Supported 00:10:12.958 Get Log Page (02h): Supported 00:10:12.958 Delete I/O Completion Queue (04h): Supported 00:10:12.958 Create I/O Completion Queue (05h): Supported 00:10:12.958 Identify (06h): Supported 00:10:12.958 Abort (08h): Supported 00:10:12.958 Set Features (09h): Supported 00:10:12.958 Get Features (0Ah): Supported 00:10:12.958 Asynchronous Event Request (0Ch): Supported 00:10:12.958 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:12.958 Directive Send (19h): Supported 00:10:12.958 Directive Receive (1Ah): Supported 00:10:12.958 Virtualization Management (1Ch): Supported 00:10:12.958 Doorbell Buffer Config (7Ch): Supported 00:10:12.958 Format NVM (80h): Supported LBA-Change 00:10:12.958 I/O Commands 00:10:12.958 ------------ 00:10:12.958 Flush (00h): Supported LBA-Change 00:10:12.958 Write (01h): Supported LBA-Change 00:10:12.958 Read (02h): Supported 00:10:12.958 Compare (05h): Supported 00:10:12.958 Write Zeroes (08h): Supported LBA-Change 00:10:12.958 Dataset Management (09h): Supported LBA-Change 00:10:12.958 Unknown (0Ch): Supported 00:10:12.958 Unknown (12h): Supported 00:10:12.958 Copy (19h): Supported LBA-Change 00:10:12.958 Unknown (1Dh): Supported LBA-Change 00:10:12.958 00:10:12.958 Error Log 00:10:12.958 ========= 00:10:12.958 00:10:12.958 Arbitration 00:10:12.958 =========== 00:10:12.958 Arbitration Burst: no limit 00:10:12.958 00:10:12.958 Power Management 00:10:12.958 ================ 00:10:12.958 Number of Power States: 1 00:10:12.958 Current Power State: Power State #0 00:10:12.958 Power State #0: 00:10:12.958 Max Power: 25.00 W 00:10:12.958 Non-Operational State: Operational 00:10:12.958 Entry Latency: 16 microseconds 00:10:12.958 Exit Latency: 4 microseconds 00:10:12.958 Relative Read Throughput: 0 00:10:12.958 Relative Read Latency: 0 00:10:12.958 Relative Write Throughput: 0 00:10:12.958 Relative Write Latency: 0 00:10:12.958 Idle Power: Not Reported 00:10:12.958 Active Power: Not Reported 00:10:12.958 Non-Operational Permissive Mode: Not Supported 00:10:12.958 00:10:12.958 Health Information 00:10:12.958 ================== 00:10:12.958 Critical Warnings: 00:10:12.958 Available Spare Space: OK 00:10:12.958 Temperature: OK 00:10:12.958 Device Reliability: OK 00:10:12.958 Read Only: No 00:10:12.958 Volatile Memory Backup: OK 00:10:12.958 Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.958 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:12.958 Available Spare: 0% 00:10:12.958 Available Spare Threshold: 0% 00:10:12.958 Life Percentage Used: 0% 00:10:12.958 Data Units Read: 1155 00:10:12.958 Data Units Written: 937 00:10:12.958 Host Read Commands: 47718 00:10:12.958 Host Write Commands: 44703 00:10:12.958 Controller Busy Time: 0 minutes 00:10:12.958 Power Cycles: 0 
00:10:12.958 Power On Hours: 0 hours 00:10:12.958 Unsafe Shutdowns: 0 00:10:12.958 Unrecoverable Media Errors: 0 00:10:12.958 Lifetime Error Log Entries: 0 00:10:12.958 Warning Temperature Time: 0 minutes 00:10:12.958 Critical Temperature Time: 0 minutes 00:10:12.958 00:10:12.958 Number of Queues 00:10:12.958 ================ 00:10:12.958 Number of I/O Submission Queues: 64 00:10:12.958 Number of I/O Completion Queues: 64 00:10:12.958 00:10:12.958 ZNS Specific Controller Data 00:10:12.958 ============================ 00:10:12.958 Zone Append Size Limit: 0 00:10:12.958 00:10:12.958 00:10:12.958 Active Namespaces 00:10:12.958 ================= 00:10:12.958 Namespace ID:1 00:10:12.958 Error Recovery Timeout: Unlimited 00:10:12.958 Command Set Identifier: [2024-07-23 18:28:12.872998] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80554 terminated unexpected 00:10:12.958 NVM (00h) 00:10:12.958 Deallocate: Supported 00:10:12.958 Deallocated/Unwritten Error: Supported 00:10:12.958 Deallocated Read Value: All 0x00 00:10:12.958 Deallocate in Write Zeroes: Not Supported 00:10:12.958 Deallocated Guard Field: 0xFFFF 00:10:12.958 Flush: Supported 00:10:12.958 Reservation: Not Supported 00:10:12.958 Namespace Sharing Capabilities: Private 00:10:12.958 Size (in LBAs): 1310720 (5GiB) 00:10:12.958 Capacity (in LBAs): 1310720 (5GiB) 00:10:12.958 Utilization (in LBAs): 1310720 (5GiB) 00:10:12.958 Thin Provisioning: Not Supported 00:10:12.958 Per-NS Atomic Units: No 00:10:12.958 Maximum Single Source Range Length: 128 00:10:12.958 Maximum Copy Length: 128 00:10:12.958 Maximum Source Range Count: 128 00:10:12.958 NGUID/EUI64 Never Reused: No 00:10:12.958 Namespace Write Protected: No 00:10:12.958 Number of LBA Formats: 8 00:10:12.958 Current LBA Format: LBA Format #04 00:10:12.959 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.959 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.959 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.959 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.959 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.959 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.959 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:12.959 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:12.959 00:10:12.959 ===================================================== 00:10:12.959 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:12.959 ===================================================== 00:10:12.959 Controller Capabilities/Features 00:10:12.959 ================================ 00:10:12.959 Vendor ID: 1b36 00:10:12.959 Subsystem Vendor ID: 1af4 00:10:12.959 Serial Number: 12343 00:10:12.959 Model Number: QEMU NVMe Ctrl 00:10:12.959 Firmware Version: 8.0.0 00:10:12.959 Recommended Arb Burst: 6 00:10:12.959 IEEE OUI Identifier: 00 54 52 00:10:12.959 Multi-path I/O 00:10:12.959 May have multiple subsystem ports: No 00:10:12.959 May have multiple controllers: Yes 00:10:12.959 Associated with SR-IOV VF: No 00:10:12.959 Max Data Transfer Size: 524288 00:10:12.959 Max Number of Namespaces: 256 00:10:12.959 Max Number of I/O Queues: 64 00:10:12.959 NVMe Specification Version (VS): 1.4 00:10:12.959 NVMe Specification Version (Identify): 1.4 00:10:12.959 Maximum Queue Entries: 2048 00:10:12.959 Contiguous Queues Required: Yes 00:10:12.959 Arbitration Mechanisms Supported 00:10:12.959 Weighted Round Robin: Not Supported 00:10:12.959 Vendor Specific: Not Supported 00:10:12.959 Reset Timeout: 7500 ms 00:10:12.959 
Doorbell Stride: 4 bytes 00:10:12.959 NVM Subsystem Reset: Not Supported 00:10:12.959 Command Sets Supported 00:10:12.959 NVM Command Set: Supported 00:10:12.959 Boot Partition: Not Supported 00:10:12.959 Memory Page Size Minimum: 4096 bytes 00:10:12.959 Memory Page Size Maximum: 65536 bytes 00:10:12.959 Persistent Memory Region: Not Supported 00:10:12.959 Optional Asynchronous Events Supported 00:10:12.959 Namespace Attribute Notices: Supported 00:10:12.959 Firmware Activation Notices: Not Supported 00:10:12.959 ANA Change Notices: Not Supported 00:10:12.959 PLE Aggregate Log Change Notices: Not Supported 00:10:12.959 LBA Status Info Alert Notices: Not Supported 00:10:12.959 EGE Aggregate Log Change Notices: Not Supported 00:10:12.959 Normal NVM Subsystem Shutdown event: Not Supported 00:10:12.959 Zone Descriptor Change Notices: Not Supported 00:10:12.959 Discovery Log Change Notices: Not Supported 00:10:12.959 Controller Attributes 00:10:12.959 128-bit Host Identifier: Not Supported 00:10:12.959 Non-Operational Permissive Mode: Not Supported 00:10:12.959 NVM Sets: Not Supported 00:10:12.959 Read Recovery Levels: Not Supported 00:10:12.959 Endurance Groups: Supported 00:10:12.959 Predictable Latency Mode: Not Supported 00:10:12.959 Traffic Based Keep ALive: Not Supported 00:10:12.959 Namespace Granularity: Not Supported 00:10:12.959 SQ Associations: Not Supported 00:10:12.959 UUID List: Not Supported 00:10:12.959 Multi-Domain Subsystem: Not Supported 00:10:12.959 Fixed Capacity Management: Not Supported 00:10:12.959 Variable Capacity Management: Not Supported 00:10:12.959 Delete Endurance Group: Not Supported 00:10:12.959 Delete NVM Set: Not Supported 00:10:12.959 Extended LBA Formats Supported: Supported 00:10:12.959 Flexible Data Placement Supported: Supported 00:10:12.959 00:10:12.959 Controller Memory Buffer Support 00:10:12.959 ================================ 00:10:12.959 Supported: No 00:10:12.959 00:10:12.959 Persistent Memory Region Support 00:10:12.959 ================================ 00:10:12.959 Supported: No 00:10:12.959 00:10:12.959 Admin Command Set Attributes 00:10:12.959 ============================ 00:10:12.959 Security Send/Receive: Not Supported 00:10:12.959 Format NVM: Supported 00:10:12.959 Firmware Activate/Download: Not Supported 00:10:12.959 Namespace Management: Supported 00:10:12.959 Device Self-Test: Not Supported 00:10:12.959 Directives: Supported 00:10:12.959 NVMe-MI: Not Supported 00:10:12.959 Virtualization Management: Not Supported 00:10:12.959 Doorbell Buffer Config: Supported 00:10:12.959 Get LBA Status Capability: Not Supported 00:10:12.959 Command & Feature Lockdown Capability: Not Supported 00:10:12.959 Abort Command Limit: 4 00:10:12.959 Async Event Request Limit: 4 00:10:12.959 Number of Firmware Slots: N/A 00:10:12.959 Firmware Slot 1 Read-Only: N/A 00:10:12.959 Firmware Activation Without Reset: N/A 00:10:12.959 Multiple Update Detection Support: N/A 00:10:12.959 Firmware Update Granularity: No Information Provided 00:10:12.959 Per-Namespace SMART Log: Yes 00:10:12.959 Asymmetric Namespace Access Log Page: Not Supported 00:10:12.959 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:12.959 Command Effects Log Page: Supported 00:10:12.959 Get Log Page Extended Data: Supported 00:10:12.959 Telemetry Log Pages: Not Supported 00:10:12.959 Persistent Event Log Pages: Not Supported 00:10:12.959 Supported Log Pages Log Page: May Support 00:10:12.959 Commands Supported & Effects Log Page: Not Supported 00:10:12.959 Feature Identifiers & Effects Log 
Page:May Support 00:10:12.959 NVMe-MI Commands & Effects Log Page: May Support 00:10:12.959 Data Area 4 for Telemetry Log: Not Supported 00:10:12.959 Error Log Page Entries Supported: 1 00:10:12.959 Keep Alive: Not Supported 00:10:12.959 00:10:12.959 NVM Command Set Attributes 00:10:12.959 ========================== 00:10:12.959 Submission Queue Entry Size 00:10:12.959 Max: 64 00:10:12.959 Min: 64 00:10:12.959 Completion Queue Entry Size 00:10:12.959 Max: 16 00:10:12.959 Min: 16 00:10:12.959 Number of Namespaces: 256 00:10:12.959 Compare Command: Supported 00:10:12.959 Write Uncorrectable Command: Not Supported 00:10:12.959 Dataset Management Command: Supported 00:10:12.959 Write Zeroes Command: Supported 00:10:12.959 Set Features Save Field: Supported 00:10:12.959 Reservations: Not Supported 00:10:12.959 Timestamp: Supported 00:10:12.959 Copy: Supported 00:10:12.959 Volatile Write Cache: Present 00:10:12.959 Atomic Write Unit (Normal): 1 00:10:12.959 Atomic Write Unit (PFail): 1 00:10:12.959 Atomic Compare & Write Unit: 1 00:10:12.959 Fused Compare & Write: Not Supported 00:10:12.959 Scatter-Gather List 00:10:12.959 SGL Command Set: Supported 00:10:12.959 SGL Keyed: Not Supported 00:10:12.959 SGL Bit Bucket Descriptor: Not Supported 00:10:12.959 SGL Metadata Pointer: Not Supported 00:10:12.959 Oversized SGL: Not Supported 00:10:12.959 SGL Metadata Address: Not Supported 00:10:12.959 SGL Offset: Not Supported 00:10:12.959 Transport SGL Data Block: Not Supported 00:10:12.959 Replay Protected Memory Block: Not Supported 00:10:12.959 00:10:12.959 Firmware Slot Information 00:10:12.959 ========================= 00:10:12.959 Active slot: 1 00:10:12.959 Slot 1 Firmware Revision: 1.0 00:10:12.959 00:10:12.959 00:10:12.959 Commands Supported and Effects 00:10:12.959 ============================== 00:10:12.959 Admin Commands 00:10:12.959 -------------- 00:10:12.959 Delete I/O Submission Queue (00h): Supported 00:10:12.959 Create I/O Submission Queue (01h): Supported 00:10:12.959 Get Log Page (02h): Supported 00:10:12.959 Delete I/O Completion Queue (04h): Supported 00:10:12.959 Create I/O Completion Queue (05h): Supported 00:10:12.959 Identify (06h): Supported 00:10:12.959 Abort (08h): Supported 00:10:12.959 Set Features (09h): Supported 00:10:12.959 Get Features (0Ah): Supported 00:10:12.959 Asynchronous Event Request (0Ch): Supported 00:10:12.959 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:12.959 Directive Send (19h): Supported 00:10:12.959 Directive Receive (1Ah): Supported 00:10:12.959 Virtualization Management (1Ch): Supported 00:10:12.959 Doorbell Buffer Config (7Ch): Supported 00:10:12.959 Format NVM (80h): Supported LBA-Change 00:10:12.959 I/O Commands 00:10:12.959 ------------ 00:10:12.959 Flush (00h): Supported LBA-Change 00:10:12.959 Write (01h): Supported LBA-Change 00:10:12.959 Read (02h): Supported 00:10:12.959 Compare (05h): Supported 00:10:12.959 Write Zeroes (08h): Supported LBA-Change 00:10:12.959 Dataset Management (09h): Supported LBA-Change 00:10:12.959 Unknown (0Ch): Supported 00:10:12.959 Unknown (12h): Supported 00:10:12.959 Copy (19h): Supported LBA-Change 00:10:12.959 Unknown (1Dh): Supported LBA-Change 00:10:12.959 00:10:12.959 Error Log 00:10:12.959 ========= 00:10:12.959 00:10:12.959 Arbitration 00:10:12.959 =========== 00:10:12.959 Arbitration Burst: no limit 00:10:12.959 00:10:12.959 Power Management 00:10:12.959 ================ 00:10:12.959 Number of Power States: 1 00:10:12.960 Current Power State: Power State #0 00:10:12.960 Power State #0: 
00:10:12.960 Max Power: 25.00 W 00:10:12.960 Non-Operational State: Operational 00:10:12.960 Entry Latency: 16 microseconds 00:10:12.960 Exit Latency: 4 microseconds 00:10:12.960 Relative Read Throughput: 0 00:10:12.960 Relative Read Latency: 0 00:10:12.960 Relative Write Throughput: 0 00:10:12.960 Relative Write Latency: 0 00:10:12.960 Idle Power: Not Reported 00:10:12.960 Active Power: Not Reported 00:10:12.960 Non-Operational Permissive Mode: Not Supported 00:10:12.960 00:10:12.960 Health Information 00:10:12.960 ================== 00:10:12.960 Critical Warnings: 00:10:12.960 Available Spare Space: OK 00:10:12.960 Temperature: OK 00:10:12.960 Device Reliability: OK 00:10:12.960 Read Only: No 00:10:12.960 Volatile Memory Backup: OK 00:10:12.960 Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.960 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:12.960 Available Spare: 0% 00:10:12.960 Available Spare Threshold: 0% 00:10:12.960 Life Percentage Used: 0% 00:10:12.960 Data Units Read: 1049 00:10:12.960 Data Units Written: 942 00:10:12.960 Host Read Commands: 34446 00:10:12.960 Host Write Commands: 33036 00:10:12.960 Controller Busy Time: 0 minutes 00:10:12.960 Power Cycles: 0 00:10:12.960 Power On Hours: 0 hours 00:10:12.960 Unsafe Shutdowns: 0 00:10:12.960 Unrecoverable Media Errors: 0 00:10:12.960 Lifetime Error Log Entries: 0 00:10:12.960 Warning Temperature Time: 0 minutes 00:10:12.960 Critical Temperature Time: 0 minutes 00:10:12.960 00:10:12.960 Number of Queues 00:10:12.960 ================ 00:10:12.960 Number of I/O Submission Queues: 64 00:10:12.960 Number of I/O Completion Queues: 64 00:10:12.960 00:10:12.960 ZNS Specific Controller Data 00:10:12.960 ============================ 00:10:12.960 Zone Append Size Limit: 0 00:10:12.960 00:10:12.960 00:10:12.960 Active Namespaces 00:10:12.960 ================= 00:10:12.960 Namespace ID:1 00:10:12.960 Error Recovery Timeout: Unlimited 00:10:12.960 Command Set Identifier: NVM (00h) 00:10:12.960 Deallocate: Supported 00:10:12.960 Deallocated/Unwritten Error: Supported 00:10:12.960 Deallocated Read Value: All 0x00 00:10:12.960 Deallocate in Write Zeroes: Not Supported 00:10:12.960 Deallocated Guard Field: 0xFFFF 00:10:12.960 Flush: Supported 00:10:12.960 Reservation: Not Supported 00:10:12.960 Namespace Sharing Capabilities: Multiple Controllers 00:10:12.960 Size (in LBAs): 262144 (1GiB) 00:10:12.960 Capacity (in LBAs): 262144 (1GiB) 00:10:12.960 Utilization (in LBAs): 262144 (1GiB) 00:10:12.960 Thin Provisioning: Not Supported 00:10:12.960 Per-NS Atomic Units: No 00:10:12.960 Maximum Single Source Range Length: 128 00:10:12.960 Maximum Copy Length: 128 00:10:12.960 Maximum Source Range Count: 128 00:10:12.960 NGUID/EUI64 Never Reused: No 00:10:12.960 Namespace Write Protected: No 00:10:12.960 Endurance group ID: 1 00:10:12.960 Number of LBA Formats: 8 00:10:12.960 Current LBA Format: LBA Format #04 00:10:12.960 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.960 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.960 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.960 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.960 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.960 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.960 LBA Format #06: Data Siz[2024-07-23 18:28:12.874145] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80554 terminated unexpected 00:10:12.960 e: 4096 Metadata Size: 16 00:10:12.960 LBA Format #07: Data Size: 4096 Metadata Size: 
64 00:10:12.960 00:10:12.960 Get Feature FDP: 00:10:12.960 ================ 00:10:12.960 Enabled: Yes 00:10:12.960 FDP configuration index: 0 00:10:12.960 00:10:12.960 FDP configurations log page 00:10:12.960 =========================== 00:10:12.960 Number of FDP configurations: 1 00:10:12.960 Version: 0 00:10:12.960 Size: 112 00:10:12.960 FDP Configuration Descriptor: 0 00:10:12.960 Descriptor Size: 96 00:10:12.960 Reclaim Group Identifier format: 2 00:10:12.960 FDP Volatile Write Cache: Not Present 00:10:12.960 FDP Configuration: Valid 00:10:12.960 Vendor Specific Size: 0 00:10:12.960 Number of Reclaim Groups: 2 00:10:12.960 Number of Reclaim Unit Handles: 8 00:10:12.960 Max Placement Identifiers: 128 00:10:12.960 Number of Namespaces Supported: 256 00:10:12.960 Reclaim unit Nominal Size: 6000000 bytes 00:10:12.960 Estimated Reclaim Unit Time Limit: Not Reported 00:10:12.960 RUH Desc #000: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #001: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #002: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #003: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #004: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #005: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #006: RUH Type: Initially Isolated 00:10:12.960 RUH Desc #007: RUH Type: Initially Isolated 00:10:12.960 00:10:12.960 FDP reclaim unit handle usage log page 00:10:12.960 ====================================== 00:10:12.960 Number of Reclaim Unit Handles: 8 00:10:12.960 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:12.960 RUH Usage Desc #001: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #002: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #003: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #004: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #005: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #006: RUH Attributes: Unused 00:10:12.960 RUH Usage Desc #007: RUH Attributes: Unused 00:10:12.960 00:10:12.960 FDP statistics log page 00:10:12.960 ======================= 00:10:12.960 Host bytes with metadata written: 574464000 00:10:12.960 Media bytes with metadata written: 575852544 00:10:12.960 Media bytes erased: 0 00:10:12.960 00:10:12.960 FDP events log page 00:10:12.960 =================== 00:10:12.960 Number of FDP events: 0 00:10:12.960 00:10:12.960 ===================================================== 00:10:12.960 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:12.960 ===================================================== 00:10:12.960 Controller Capabilities/Features 00:10:12.960 ================================ 00:10:12.960 Vendor ID: 1b36 00:10:12.960 Subsystem Vendor ID: 1af4 00:10:12.960 Serial Number: 12342 00:10:12.960 Model Number: QEMU NVMe Ctrl 00:10:12.960 Firmware Version: 8.0.0 00:10:12.960 Recommended Arb Burst: 6 00:10:12.960 IEEE OUI Identifier: 00 54 52 00:10:12.960 Multi-path I/O 00:10:12.960 May have multiple subsystem ports: No 00:10:12.960 May have multiple controllers: No 00:10:12.960 Associated with SR-IOV VF: No 00:10:12.960 Max Data Transfer Size: 524288 00:10:12.960 Max Number of Namespaces: 256 00:10:12.960 Max Number of I/O Queues: 64 00:10:12.960 NVMe Specification Version (VS): 1.4 00:10:12.960 NVMe Specification Version (Identify): 1.4 00:10:12.960 Maximum Queue Entries: 2048 00:10:12.960 Contiguous Queues Required: Yes 00:10:12.960 Arbitration Mechanisms Supported 00:10:12.960 Weighted Round Robin: Not Supported 00:10:12.960 Vendor Specific: Not Supported 00:10:12.960 Reset Timeout: 7500 ms 
00:10:12.960 Doorbell Stride: 4 bytes 00:10:12.960 NVM Subsystem Reset: Not Supported 00:10:12.960 Command Sets Supported 00:10:12.960 NVM Command Set: Supported 00:10:12.960 Boot Partition: Not Supported 00:10:12.960 Memory Page Size Minimum: 4096 bytes 00:10:12.960 Memory Page Size Maximum: 65536 bytes 00:10:12.960 Persistent Memory Region: Not Supported 00:10:12.960 Optional Asynchronous Events Supported 00:10:12.960 Namespace Attribute Notices: Supported 00:10:12.960 Firmware Activation Notices: Not Supported 00:10:12.960 ANA Change Notices: Not Supported 00:10:12.960 PLE Aggregate Log Change Notices: Not Supported 00:10:12.960 LBA Status Info Alert Notices: Not Supported 00:10:12.960 EGE Aggregate Log Change Notices: Not Supported 00:10:12.960 Normal NVM Subsystem Shutdown event: Not Supported 00:10:12.960 Zone Descriptor Change Notices: Not Supported 00:10:12.960 Discovery Log Change Notices: Not Supported 00:10:12.960 Controller Attributes 00:10:12.960 128-bit Host Identifier: Not Supported 00:10:12.960 Non-Operational Permissive Mode: Not Supported 00:10:12.960 NVM Sets: Not Supported 00:10:12.960 Read Recovery Levels: Not Supported 00:10:12.960 Endurance Groups: Not Supported 00:10:12.960 Predictable Latency Mode: Not Supported 00:10:12.960 Traffic Based Keep ALive: Not Supported 00:10:12.960 Namespace Granularity: Not Supported 00:10:12.960 SQ Associations: Not Supported 00:10:12.960 UUID List: Not Supported 00:10:12.960 Multi-Domain Subsystem: Not Supported 00:10:12.960 Fixed Capacity Management: Not Supported 00:10:12.960 Variable Capacity Management: Not Supported 00:10:12.960 Delete Endurance Group: Not Supported 00:10:12.961 Delete NVM Set: Not Supported 00:10:12.961 Extended LBA Formats Supported: Supported 00:10:12.961 Flexible Data Placement Supported: Not Supported 00:10:12.961 00:10:12.961 Controller Memory Buffer Support 00:10:12.961 ================================ 00:10:12.961 Supported: No 00:10:12.961 00:10:12.961 Persistent Memory Region Support 00:10:12.961 ================================ 00:10:12.961 Supported: No 00:10:12.961 00:10:12.961 Admin Command Set Attributes 00:10:12.961 ============================ 00:10:12.961 Security Send/Receive: Not Supported 00:10:12.961 Format NVM: Supported 00:10:12.961 Firmware Activate/Download: Not Supported 00:10:12.961 Namespace Management: Supported 00:10:12.961 Device Self-Test: Not Supported 00:10:12.961 Directives: Supported 00:10:12.961 NVMe-MI: Not Supported 00:10:12.961 Virtualization Management: Not Supported 00:10:12.961 Doorbell Buffer Config: Supported 00:10:12.961 Get LBA Status Capability: Not Supported 00:10:12.961 Command & Feature Lockdown Capability: Not Supported 00:10:12.961 Abort Command Limit: 4 00:10:12.961 Async Event Request Limit: 4 00:10:12.961 Number of Firmware Slots: N/A 00:10:12.961 Firmware Slot 1 Read-Only: N/A 00:10:12.961 Firmware Activation Without Reset: N/A 00:10:12.961 Multiple Update Detection Support: N/A 00:10:12.961 Firmware Update Granularity: No Information Provided 00:10:12.961 Per-Namespace SMART Log: Yes 00:10:12.961 Asymmetric Namespace Access Log Page: Not Supported 00:10:12.961 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:12.961 Command Effects Log Page: Supported 00:10:12.961 Get Log Page Extended Data: Supported 00:10:12.961 Telemetry Log Pages: Not Supported 00:10:12.961 Persistent Event Log Pages: Not Supported 00:10:12.961 Supported Log Pages Log Page: May Support 00:10:12.961 Commands Supported & Effects Log Page: Not Supported 00:10:12.961 Feature Identifiers & 
Effects Log Page:May Support 00:10:12.961 NVMe-MI Commands & Effects Log Page: May Support 00:10:12.961 Data Area 4 for Telemetry Log: Not Supported 00:10:12.961 Error Log Page Entries Supported: 1 00:10:12.961 Keep Alive: Not Supported 00:10:12.961 00:10:12.961 NVM Command Set Attributes 00:10:12.961 ========================== 00:10:12.961 Submission Queue Entry Size 00:10:12.961 Max: 64 00:10:12.961 Min: 64 00:10:12.961 Completion Queue Entry Size 00:10:12.961 Max: 16 00:10:12.961 Min: 16 00:10:12.961 Number of Namespaces: 256 00:10:12.961 Compare Command: Supported 00:10:12.961 Write Uncorrectable Command: Not Supported 00:10:12.961 Dataset Management Command: Supported 00:10:12.961 Write Zeroes Command: Supported 00:10:12.961 Set Features Save Field: Supported 00:10:12.961 Reservations: Not Supported 00:10:12.961 Timestamp: Supported 00:10:12.961 Copy: Supported 00:10:12.961 Volatile Write Cache: Present 00:10:12.961 Atomic Write Unit (Normal): 1 00:10:12.961 Atomic Write Unit (PFail): 1 00:10:12.961 Atomic Compare & Write Unit: 1 00:10:12.961 Fused Compare & Write: Not Supported 00:10:12.961 Scatter-Gather List 00:10:12.961 SGL Command Set: Supported 00:10:12.961 SGL Keyed: Not Supported 00:10:12.961 SGL Bit Bucket Descriptor: Not Supported 00:10:12.961 SGL Metadata Pointer: Not Supported 00:10:12.961 Oversized SGL: Not Supported 00:10:12.961 SGL Metadata Address: Not Supported 00:10:12.961 SGL Offset: Not Supported 00:10:12.961 Transport SGL Data Block: Not Supported 00:10:12.961 Replay Protected Memory Block: Not Supported 00:10:12.961 00:10:12.961 Firmware Slot Information 00:10:12.961 ========================= 00:10:12.961 Active slot: 1 00:10:12.961 Slot 1 Firmware Revision: 1.0 00:10:12.961 00:10:12.961 00:10:12.961 Commands Supported and Effects 00:10:12.961 ============================== 00:10:12.961 Admin Commands 00:10:12.961 -------------- 00:10:12.961 Delete I/O Submission Queue (00h): Supported 00:10:12.961 Create I/O Submission Queue (01h): Supported 00:10:12.961 Get Log Page (02h): Supported 00:10:12.961 Delete I/O Completion Queue (04h): Supported 00:10:12.961 Create I/O Completion Queue (05h): Supported 00:10:12.961 Identify (06h): Supported 00:10:12.961 Abort (08h): Supported 00:10:12.961 Set Features (09h): Supported 00:10:12.961 Get Features (0Ah): Supported 00:10:12.961 Asynchronous Event Request (0Ch): Supported 00:10:12.961 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:12.961 Directive Send (19h): Supported 00:10:12.961 Directive Receive (1Ah): Supported 00:10:12.961 Virtualization Management (1Ch): Supported 00:10:12.961 Doorbell Buffer Config (7Ch): Supported 00:10:12.961 Format NVM (80h): Supported LBA-Change 00:10:12.961 I/O Commands 00:10:12.961 ------------ 00:10:12.961 Flush (00h): Supported LBA-Change 00:10:12.961 Write (01h): Supported LBA-Change 00:10:12.961 Read (02h): Supported 00:10:12.961 Compare (05h): Supported 00:10:12.961 Write Zeroes (08h): Supported LBA-Change 00:10:12.961 Dataset Management (09h): Supported LBA-Change 00:10:12.961 Unknown (0Ch): Supported 00:10:12.961 Unknown (12h): Supported 00:10:12.961 Copy (19h): Supported LBA-Change 00:10:12.961 Unknown (1Dh): Supported LBA-Change 00:10:12.961 00:10:12.961 Error Log 00:10:12.961 ========= 00:10:12.961 00:10:12.961 Arbitration 00:10:12.961 =========== 00:10:12.961 Arbitration Burst: no limit 00:10:12.961 00:10:12.961 Power Management 00:10:12.961 ================ 00:10:12.961 Number of Power States: 1 00:10:12.961 Current Power State: Power State #0 00:10:12.961 Power 
State #0: 00:10:12.961 Max Power: 25.00 W 00:10:12.961 Non-Operational State: Operational 00:10:12.961 Entry Latency: 16 microseconds 00:10:12.961 Exit Latency: 4 microseconds 00:10:12.961 Relative Read Throughput: 0 00:10:12.961 Relative Read Latency: 0 00:10:12.961 Relative Write Throughput: 0 00:10:12.961 Relative Write Latency: 0 00:10:12.961 Idle Power: Not Reported 00:10:12.961 Active Power: Not Reported 00:10:12.961 Non-Operational Permissive Mode: Not Supported 00:10:12.961 00:10:12.961 Health Information 00:10:12.961 ================== 00:10:12.961 Critical Warnings: 00:10:12.961 Available Spare Space: OK 00:10:12.961 Temperature: OK 00:10:12.961 Device Reliability: OK 00:10:12.961 Read Only: No 00:10:12.961 Volatile Memory Backup: OK 00:10:12.961 Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.961 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:12.961 Available Spare: 0% 00:10:12.961 Available Spare Threshold: 0% 00:10:12.961 Life Percentage Used: 0% 00:10:12.961 Data Units Read: 2481 00:10:12.961 Data Units Written: 2161 00:10:12.961 Host Read Commands: 98144 00:10:12.961 Host Write Commands: 93914 00:10:12.962 Controller Busy Time: 0 minutes 00:10:12.962 Power Cycles: 0 00:10:12.962 Power On Hours: 0 hours 00:10:12.962 Unsafe Shutdowns: 0 00:10:12.962 Unrecoverable Media Errors: 0 00:10:12.962 Lifetime Error Log Entries: 0 00:10:12.962 Warning Temperature Time: 0 minutes 00:10:12.962 Critical Temperature Time: 0 minutes 00:10:12.962 00:10:12.962 Number of Queues 00:10:12.962 ================ 00:10:12.962 Number of I/O Submission Queues: 64 00:10:12.962 Number of I/O Completion Queues: 64 00:10:12.962 00:10:12.962 ZNS Specific Controller Data 00:10:12.962 ============================ 00:10:12.962 Zone Append Size Limit: 0 00:10:12.962 00:10:12.962 00:10:12.962 Active Namespaces 00:10:12.962 ================= 00:10:12.962 Namespace ID:1 00:10:12.962 Error Recovery Timeout: Unlimited 00:10:12.962 Command Set Identifier: NVM (00h) 00:10:12.962 Deallocate: Supported 00:10:12.962 Deallocated/Unwritten Error: Supported 00:10:12.962 Deallocated Read Value: All 0x00 00:10:12.962 Deallocate in Write Zeroes: Not Supported 00:10:12.962 Deallocated Guard Field: 0xFFFF 00:10:12.962 Flush: Supported 00:10:12.962 Reservation: Not Supported 00:10:12.962 Namespace Sharing Capabilities: Private 00:10:12.962 Size (in LBAs): 1048576 (4GiB) 00:10:12.962 Capacity (in LBAs): 1048576 (4GiB) 00:10:12.962 Utilization (in LBAs): 1048576 (4GiB) 00:10:12.962 Thin Provisioning: Not Supported 00:10:12.962 Per-NS Atomic Units: No 00:10:12.962 Maximum Single Source Range Length: 128 00:10:12.962 Maximum Copy Length: 128 00:10:12.962 Maximum Source Range Count: 128 00:10:12.962 NGUID/EUI64 Never Reused: No 00:10:12.962 Namespace Write Protected: No 00:10:12.962 Number of LBA Formats: 8 00:10:12.962 Current LBA Format: LBA Format #04 00:10:12.962 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.962 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.962 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.962 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.962 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.962 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:12.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:12.962 00:10:12.962 Namespace ID:2 00:10:12.962 Error Recovery Timeout: Unlimited 00:10:12.962 Command Set Identifier: NVM (00h) 00:10:12.962 Deallocate: Supported 00:10:12.962 
Deallocated/Unwritten Error: Supported 00:10:12.962 Deallocated Read Value: All 0x00 00:10:12.962 Deallocate in Write Zeroes: Not Supported 00:10:12.962 Deallocated Guard Field: 0xFFFF 00:10:12.962 Flush: Supported 00:10:12.962 Reservation: Not Supported 00:10:12.962 Namespace Sharing Capabilities: Private 00:10:12.962 Size (in LBAs): 1048576 (4GiB) 00:10:12.962 Capacity (in LBAs): 1048576 (4GiB) 00:10:12.962 Utilization (in LBAs): 1048576 (4GiB) 00:10:12.962 Thin Provisioning: Not Supported 00:10:12.962 Per-NS Atomic Units: No 00:10:12.962 Maximum Single Source Range Length: 128 00:10:12.962 Maximum Copy Length: 128 00:10:12.962 Maximum Source Range Count: 128 00:10:12.962 NGUID/EUI64 Never Reused: No 00:10:12.962 Namespace Write Protected: No 00:10:12.962 Number of LBA Formats: 8 00:10:12.962 Current LBA Format: LBA Format #04 00:10:12.962 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.962 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.962 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.962 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.962 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.962 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:12.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:12.962 00:10:12.962 Namespace ID:3 00:10:12.962 Error Recovery Timeout: Unlimited 00:10:12.962 Command Set Identifier: NVM (00h) 00:10:12.962 Deallocate: Supported 00:10:12.962 Deallocated/Unwritten Error: Supported 00:10:12.962 Deallocated Read Value: All 0x00 00:10:12.962 Deallocate in Write Zeroes: Not Supported 00:10:12.962 Deallocated Guard Field: 0xFFFF 00:10:12.962 Flush: Supported 00:10:12.962 Reservation: Not Supported 00:10:12.962 Namespace Sharing Capabilities: Private 00:10:12.962 Size (in LBAs): 1048576 (4GiB) 00:10:12.962 Capacity (in LBAs): 1048576 (4GiB) 00:10:12.962 Utilization (in LBAs): 1048576 (4GiB) 00:10:12.962 Thin Provisioning: Not Supported 00:10:12.962 Per-NS Atomic Units: No 00:10:12.962 Maximum Single Source Range Length: 128 00:10:12.962 Maximum Copy Length: 128 00:10:12.962 Maximum Source Range Count: 128 00:10:12.962 NGUID/EUI64 Never Reused: No 00:10:12.962 Namespace Write Protected: No 00:10:12.962 Number of LBA Formats: 8 00:10:12.962 Current LBA Format: LBA Format #04 00:10:12.962 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:12.962 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:12.962 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:12.962 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:12.962 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:12.962 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:12.962 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:12.962 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:12.962 00:10:12.962 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:12.962 18:28:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:13.222 ===================================================== 00:10:13.222 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:13.222 ===================================================== 00:10:13.222 Controller Capabilities/Features 00:10:13.222 ================================ 00:10:13.222 Vendor ID: 1b36 00:10:13.222 Subsystem Vendor ID: 1af4 00:10:13.222 Serial Number: 12340 00:10:13.222 Model Number: QEMU NVMe Ctrl 
00:10:13.222 Firmware Version: 8.0.0 00:10:13.222 Recommended Arb Burst: 6 00:10:13.222 IEEE OUI Identifier: 00 54 52 00:10:13.222 Multi-path I/O 00:10:13.222 May have multiple subsystem ports: No 00:10:13.222 May have multiple controllers: No 00:10:13.222 Associated with SR-IOV VF: No 00:10:13.222 Max Data Transfer Size: 524288 00:10:13.222 Max Number of Namespaces: 256 00:10:13.222 Max Number of I/O Queues: 64 00:10:13.222 NVMe Specification Version (VS): 1.4 00:10:13.222 NVMe Specification Version (Identify): 1.4 00:10:13.222 Maximum Queue Entries: 2048 00:10:13.222 Contiguous Queues Required: Yes 00:10:13.222 Arbitration Mechanisms Supported 00:10:13.222 Weighted Round Robin: Not Supported 00:10:13.222 Vendor Specific: Not Supported 00:10:13.222 Reset Timeout: 7500 ms 00:10:13.222 Doorbell Stride: 4 bytes 00:10:13.222 NVM Subsystem Reset: Not Supported 00:10:13.222 Command Sets Supported 00:10:13.222 NVM Command Set: Supported 00:10:13.222 Boot Partition: Not Supported 00:10:13.222 Memory Page Size Minimum: 4096 bytes 00:10:13.222 Memory Page Size Maximum: 65536 bytes 00:10:13.222 Persistent Memory Region: Not Supported 00:10:13.222 Optional Asynchronous Events Supported 00:10:13.222 Namespace Attribute Notices: Supported 00:10:13.222 Firmware Activation Notices: Not Supported 00:10:13.222 ANA Change Notices: Not Supported 00:10:13.222 PLE Aggregate Log Change Notices: Not Supported 00:10:13.222 LBA Status Info Alert Notices: Not Supported 00:10:13.222 EGE Aggregate Log Change Notices: Not Supported 00:10:13.222 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.222 Zone Descriptor Change Notices: Not Supported 00:10:13.222 Discovery Log Change Notices: Not Supported 00:10:13.222 Controller Attributes 00:10:13.222 128-bit Host Identifier: Not Supported 00:10:13.222 Non-Operational Permissive Mode: Not Supported 00:10:13.222 NVM Sets: Not Supported 00:10:13.222 Read Recovery Levels: Not Supported 00:10:13.222 Endurance Groups: Not Supported 00:10:13.222 Predictable Latency Mode: Not Supported 00:10:13.222 Traffic Based Keep ALive: Not Supported 00:10:13.222 Namespace Granularity: Not Supported 00:10:13.222 SQ Associations: Not Supported 00:10:13.222 UUID List: Not Supported 00:10:13.222 Multi-Domain Subsystem: Not Supported 00:10:13.222 Fixed Capacity Management: Not Supported 00:10:13.222 Variable Capacity Management: Not Supported 00:10:13.222 Delete Endurance Group: Not Supported 00:10:13.222 Delete NVM Set: Not Supported 00:10:13.222 Extended LBA Formats Supported: Supported 00:10:13.222 Flexible Data Placement Supported: Not Supported 00:10:13.222 00:10:13.222 Controller Memory Buffer Support 00:10:13.222 ================================ 00:10:13.222 Supported: No 00:10:13.222 00:10:13.222 Persistent Memory Region Support 00:10:13.222 ================================ 00:10:13.222 Supported: No 00:10:13.222 00:10:13.222 Admin Command Set Attributes 00:10:13.222 ============================ 00:10:13.222 Security Send/Receive: Not Supported 00:10:13.222 Format NVM: Supported 00:10:13.222 Firmware Activate/Download: Not Supported 00:10:13.222 Namespace Management: Supported 00:10:13.222 Device Self-Test: Not Supported 00:10:13.222 Directives: Supported 00:10:13.222 NVMe-MI: Not Supported 00:10:13.222 Virtualization Management: Not Supported 00:10:13.222 Doorbell Buffer Config: Supported 00:10:13.222 Get LBA Status Capability: Not Supported 00:10:13.222 Command & Feature Lockdown Capability: Not Supported 00:10:13.222 Abort Command Limit: 4 00:10:13.222 Async Event Request 
Limit: 4 00:10:13.222 Number of Firmware Slots: N/A 00:10:13.222 Firmware Slot 1 Read-Only: N/A 00:10:13.222 Firmware Activation Without Reset: N/A 00:10:13.222 Multiple Update Detection Support: N/A 00:10:13.222 Firmware Update Granularity: No Information Provided 00:10:13.222 Per-Namespace SMART Log: Yes 00:10:13.223 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.223 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:13.223 Command Effects Log Page: Supported 00:10:13.223 Get Log Page Extended Data: Supported 00:10:13.223 Telemetry Log Pages: Not Supported 00:10:13.223 Persistent Event Log Pages: Not Supported 00:10:13.223 Supported Log Pages Log Page: May Support 00:10:13.223 Commands Supported & Effects Log Page: Not Supported 00:10:13.223 Feature Identifiers & Effects Log Page:May Support 00:10:13.223 NVMe-MI Commands & Effects Log Page: May Support 00:10:13.223 Data Area 4 for Telemetry Log: Not Supported 00:10:13.223 Error Log Page Entries Supported: 1 00:10:13.223 Keep Alive: Not Supported 00:10:13.223 00:10:13.223 NVM Command Set Attributes 00:10:13.223 ========================== 00:10:13.223 Submission Queue Entry Size 00:10:13.223 Max: 64 00:10:13.223 Min: 64 00:10:13.223 Completion Queue Entry Size 00:10:13.223 Max: 16 00:10:13.223 Min: 16 00:10:13.223 Number of Namespaces: 256 00:10:13.223 Compare Command: Supported 00:10:13.223 Write Uncorrectable Command: Not Supported 00:10:13.223 Dataset Management Command: Supported 00:10:13.223 Write Zeroes Command: Supported 00:10:13.223 Set Features Save Field: Supported 00:10:13.223 Reservations: Not Supported 00:10:13.223 Timestamp: Supported 00:10:13.223 Copy: Supported 00:10:13.223 Volatile Write Cache: Present 00:10:13.223 Atomic Write Unit (Normal): 1 00:10:13.223 Atomic Write Unit (PFail): 1 00:10:13.223 Atomic Compare & Write Unit: 1 00:10:13.223 Fused Compare & Write: Not Supported 00:10:13.223 Scatter-Gather List 00:10:13.223 SGL Command Set: Supported 00:10:13.223 SGL Keyed: Not Supported 00:10:13.223 SGL Bit Bucket Descriptor: Not Supported 00:10:13.223 SGL Metadata Pointer: Not Supported 00:10:13.223 Oversized SGL: Not Supported 00:10:13.223 SGL Metadata Address: Not Supported 00:10:13.223 SGL Offset: Not Supported 00:10:13.223 Transport SGL Data Block: Not Supported 00:10:13.223 Replay Protected Memory Block: Not Supported 00:10:13.223 00:10:13.223 Firmware Slot Information 00:10:13.223 ========================= 00:10:13.223 Active slot: 1 00:10:13.223 Slot 1 Firmware Revision: 1.0 00:10:13.223 00:10:13.223 00:10:13.223 Commands Supported and Effects 00:10:13.223 ============================== 00:10:13.223 Admin Commands 00:10:13.223 -------------- 00:10:13.223 Delete I/O Submission Queue (00h): Supported 00:10:13.223 Create I/O Submission Queue (01h): Supported 00:10:13.223 Get Log Page (02h): Supported 00:10:13.223 Delete I/O Completion Queue (04h): Supported 00:10:13.223 Create I/O Completion Queue (05h): Supported 00:10:13.223 Identify (06h): Supported 00:10:13.223 Abort (08h): Supported 00:10:13.223 Set Features (09h): Supported 00:10:13.223 Get Features (0Ah): Supported 00:10:13.223 Asynchronous Event Request (0Ch): Supported 00:10:13.223 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:13.223 Directive Send (19h): Supported 00:10:13.223 Directive Receive (1Ah): Supported 00:10:13.223 Virtualization Management (1Ch): Supported 00:10:13.223 Doorbell Buffer Config (7Ch): Supported 00:10:13.223 Format NVM (80h): Supported LBA-Change 00:10:13.223 I/O Commands 00:10:13.223 ------------ 
00:10:13.223 Flush (00h): Supported LBA-Change 00:10:13.223 Write (01h): Supported LBA-Change 00:10:13.223 Read (02h): Supported 00:10:13.223 Compare (05h): Supported 00:10:13.223 Write Zeroes (08h): Supported LBA-Change 00:10:13.223 Dataset Management (09h): Supported LBA-Change 00:10:13.223 Unknown (0Ch): Supported 00:10:13.223 Unknown (12h): Supported 00:10:13.223 Copy (19h): Supported LBA-Change 00:10:13.223 Unknown (1Dh): Supported LBA-Change 00:10:13.223 00:10:13.223 Error Log 00:10:13.223 ========= 00:10:13.223 00:10:13.223 Arbitration 00:10:13.223 =========== 00:10:13.223 Arbitration Burst: no limit 00:10:13.223 00:10:13.223 Power Management 00:10:13.223 ================ 00:10:13.223 Number of Power States: 1 00:10:13.223 Current Power State: Power State #0 00:10:13.223 Power State #0: 00:10:13.223 Max Power: 25.00 W 00:10:13.223 Non-Operational State: Operational 00:10:13.223 Entry Latency: 16 microseconds 00:10:13.223 Exit Latency: 4 microseconds 00:10:13.223 Relative Read Throughput: 0 00:10:13.223 Relative Read Latency: 0 00:10:13.223 Relative Write Throughput: 0 00:10:13.223 Relative Write Latency: 0 00:10:13.223 Idle Power: Not Reported 00:10:13.223 Active Power: Not Reported 00:10:13.223 Non-Operational Permissive Mode: Not Supported 00:10:13.223 00:10:13.223 Health Information 00:10:13.223 ================== 00:10:13.223 Critical Warnings: 00:10:13.223 Available Spare Space: OK 00:10:13.223 Temperature: OK 00:10:13.223 Device Reliability: OK 00:10:13.223 Read Only: No 00:10:13.223 Volatile Memory Backup: OK 00:10:13.223 Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.223 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:13.223 Available Spare: 0% 00:10:13.223 Available Spare Threshold: 0% 00:10:13.223 Life Percentage Used: 0% 00:10:13.223 Data Units Read: 702 00:10:13.223 Data Units Written: 593 00:10:13.223 Host Read Commands: 31523 00:10:13.223 Host Write Commands: 30561 00:10:13.223 Controller Busy Time: 0 minutes 00:10:13.223 Power Cycles: 0 00:10:13.223 Power On Hours: 0 hours 00:10:13.223 Unsafe Shutdowns: 0 00:10:13.223 Unrecoverable Media Errors: 0 00:10:13.223 Lifetime Error Log Entries: 0 00:10:13.223 Warning Temperature Time: 0 minutes 00:10:13.223 Critical Temperature Time: 0 minutes 00:10:13.223 00:10:13.223 Number of Queues 00:10:13.223 ================ 00:10:13.223 Number of I/O Submission Queues: 64 00:10:13.223 Number of I/O Completion Queues: 64 00:10:13.223 00:10:13.223 ZNS Specific Controller Data 00:10:13.223 ============================ 00:10:13.223 Zone Append Size Limit: 0 00:10:13.223 00:10:13.223 00:10:13.223 Active Namespaces 00:10:13.223 ================= 00:10:13.223 Namespace ID:1 00:10:13.223 Error Recovery Timeout: Unlimited 00:10:13.223 Command Set Identifier: NVM (00h) 00:10:13.223 Deallocate: Supported 00:10:13.223 Deallocated/Unwritten Error: Supported 00:10:13.223 Deallocated Read Value: All 0x00 00:10:13.223 Deallocate in Write Zeroes: Not Supported 00:10:13.223 Deallocated Guard Field: 0xFFFF 00:10:13.223 Flush: Supported 00:10:13.223 Reservation: Not Supported 00:10:13.223 Metadata Transferred as: Separate Metadata Buffer 00:10:13.223 Namespace Sharing Capabilities: Private 00:10:13.223 Size (in LBAs): 1548666 (5GiB) 00:10:13.223 Capacity (in LBAs): 1548666 (5GiB) 00:10:13.223 Utilization (in LBAs): 1548666 (5GiB) 00:10:13.223 Thin Provisioning: Not Supported 00:10:13.223 Per-NS Atomic Units: No 00:10:13.223 Maximum Single Source Range Length: 128 00:10:13.223 Maximum Copy Length: 128 00:10:13.223 Maximum Source Range Count: 
128 00:10:13.223 NGUID/EUI64 Never Reused: No 00:10:13.223 Namespace Write Protected: No 00:10:13.223 Number of LBA Formats: 8 00:10:13.223 Current LBA Format: LBA Format #07 00:10:13.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.223 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:13.223 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:13.223 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:13.223 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:13.223 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:13.223 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:13.223 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:13.223 00:10:13.223 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:13.223 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:13.482 ===================================================== 00:10:13.482 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:13.482 ===================================================== 00:10:13.482 Controller Capabilities/Features 00:10:13.482 ================================ 00:10:13.482 Vendor ID: 1b36 00:10:13.482 Subsystem Vendor ID: 1af4 00:10:13.482 Serial Number: 12341 00:10:13.482 Model Number: QEMU NVMe Ctrl 00:10:13.482 Firmware Version: 8.0.0 00:10:13.482 Recommended Arb Burst: 6 00:10:13.482 IEEE OUI Identifier: 00 54 52 00:10:13.482 Multi-path I/O 00:10:13.482 May have multiple subsystem ports: No 00:10:13.482 May have multiple controllers: No 00:10:13.482 Associated with SR-IOV VF: No 00:10:13.482 Max Data Transfer Size: 524288 00:10:13.482 Max Number of Namespaces: 256 00:10:13.482 Max Number of I/O Queues: 64 00:10:13.482 NVMe Specification Version (VS): 1.4 00:10:13.482 NVMe Specification Version (Identify): 1.4 00:10:13.482 Maximum Queue Entries: 2048 00:10:13.482 Contiguous Queues Required: Yes 00:10:13.482 Arbitration Mechanisms Supported 00:10:13.482 Weighted Round Robin: Not Supported 00:10:13.482 Vendor Specific: Not Supported 00:10:13.482 Reset Timeout: 7500 ms 00:10:13.482 Doorbell Stride: 4 bytes 00:10:13.482 NVM Subsystem Reset: Not Supported 00:10:13.482 Command Sets Supported 00:10:13.482 NVM Command Set: Supported 00:10:13.482 Boot Partition: Not Supported 00:10:13.482 Memory Page Size Minimum: 4096 bytes 00:10:13.482 Memory Page Size Maximum: 65536 bytes 00:10:13.482 Persistent Memory Region: Not Supported 00:10:13.482 Optional Asynchronous Events Supported 00:10:13.482 Namespace Attribute Notices: Supported 00:10:13.482 Firmware Activation Notices: Not Supported 00:10:13.482 ANA Change Notices: Not Supported 00:10:13.482 PLE Aggregate Log Change Notices: Not Supported 00:10:13.482 LBA Status Info Alert Notices: Not Supported 00:10:13.482 EGE Aggregate Log Change Notices: Not Supported 00:10:13.482 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.482 Zone Descriptor Change Notices: Not Supported 00:10:13.482 Discovery Log Change Notices: Not Supported 00:10:13.482 Controller Attributes 00:10:13.482 128-bit Host Identifier: Not Supported 00:10:13.482 Non-Operational Permissive Mode: Not Supported 00:10:13.482 NVM Sets: Not Supported 00:10:13.482 Read Recovery Levels: Not Supported 00:10:13.482 Endurance Groups: Not Supported 00:10:13.482 Predictable Latency Mode: Not Supported 00:10:13.482 Traffic Based Keep ALive: Not Supported 00:10:13.482 Namespace Granularity: Not Supported 00:10:13.482 SQ Associations: Not Supported 
00:10:13.482 UUID List: Not Supported 00:10:13.482 Multi-Domain Subsystem: Not Supported 00:10:13.482 Fixed Capacity Management: Not Supported 00:10:13.482 Variable Capacity Management: Not Supported 00:10:13.482 Delete Endurance Group: Not Supported 00:10:13.482 Delete NVM Set: Not Supported 00:10:13.482 Extended LBA Formats Supported: Supported 00:10:13.482 Flexible Data Placement Supported: Not Supported 00:10:13.482 00:10:13.482 Controller Memory Buffer Support 00:10:13.482 ================================ 00:10:13.482 Supported: No 00:10:13.482 00:10:13.482 Persistent Memory Region Support 00:10:13.482 ================================ 00:10:13.482 Supported: No 00:10:13.482 00:10:13.482 Admin Command Set Attributes 00:10:13.482 ============================ 00:10:13.482 Security Send/Receive: Not Supported 00:10:13.482 Format NVM: Supported 00:10:13.482 Firmware Activate/Download: Not Supported 00:10:13.482 Namespace Management: Supported 00:10:13.482 Device Self-Test: Not Supported 00:10:13.482 Directives: Supported 00:10:13.482 NVMe-MI: Not Supported 00:10:13.482 Virtualization Management: Not Supported 00:10:13.482 Doorbell Buffer Config: Supported 00:10:13.482 Get LBA Status Capability: Not Supported 00:10:13.482 Command & Feature Lockdown Capability: Not Supported 00:10:13.482 Abort Command Limit: 4 00:10:13.482 Async Event Request Limit: 4 00:10:13.482 Number of Firmware Slots: N/A 00:10:13.482 Firmware Slot 1 Read-Only: N/A 00:10:13.482 Firmware Activation Without Reset: N/A 00:10:13.482 Multiple Update Detection Support: N/A 00:10:13.482 Firmware Update Granularity: No Information Provided 00:10:13.482 Per-Namespace SMART Log: Yes 00:10:13.482 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.482 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:13.482 Command Effects Log Page: Supported 00:10:13.482 Get Log Page Extended Data: Supported 00:10:13.482 Telemetry Log Pages: Not Supported 00:10:13.482 Persistent Event Log Pages: Not Supported 00:10:13.482 Supported Log Pages Log Page: May Support 00:10:13.482 Commands Supported & Effects Log Page: Not Supported 00:10:13.482 Feature Identifiers & Effects Log Page:May Support 00:10:13.482 NVMe-MI Commands & Effects Log Page: May Support 00:10:13.482 Data Area 4 for Telemetry Log: Not Supported 00:10:13.482 Error Log Page Entries Supported: 1 00:10:13.482 Keep Alive: Not Supported 00:10:13.482 00:10:13.482 NVM Command Set Attributes 00:10:13.482 ========================== 00:10:13.482 Submission Queue Entry Size 00:10:13.482 Max: 64 00:10:13.482 Min: 64 00:10:13.482 Completion Queue Entry Size 00:10:13.482 Max: 16 00:10:13.482 Min: 16 00:10:13.482 Number of Namespaces: 256 00:10:13.482 Compare Command: Supported 00:10:13.482 Write Uncorrectable Command: Not Supported 00:10:13.482 Dataset Management Command: Supported 00:10:13.482 Write Zeroes Command: Supported 00:10:13.482 Set Features Save Field: Supported 00:10:13.482 Reservations: Not Supported 00:10:13.482 Timestamp: Supported 00:10:13.482 Copy: Supported 00:10:13.482 Volatile Write Cache: Present 00:10:13.482 Atomic Write Unit (Normal): 1 00:10:13.482 Atomic Write Unit (PFail): 1 00:10:13.482 Atomic Compare & Write Unit: 1 00:10:13.482 Fused Compare & Write: Not Supported 00:10:13.482 Scatter-Gather List 00:10:13.482 SGL Command Set: Supported 00:10:13.482 SGL Keyed: Not Supported 00:10:13.482 SGL Bit Bucket Descriptor: Not Supported 00:10:13.482 SGL Metadata Pointer: Not Supported 00:10:13.482 Oversized SGL: Not Supported 00:10:13.482 SGL Metadata Address: Not 
Supported 00:10:13.482 SGL Offset: Not Supported 00:10:13.482 Transport SGL Data Block: Not Supported 00:10:13.482 Replay Protected Memory Block: Not Supported 00:10:13.482 00:10:13.482 Firmware Slot Information 00:10:13.482 ========================= 00:10:13.482 Active slot: 1 00:10:13.482 Slot 1 Firmware Revision: 1.0 00:10:13.482 00:10:13.482 00:10:13.482 Commands Supported and Effects 00:10:13.482 ============================== 00:10:13.482 Admin Commands 00:10:13.482 -------------- 00:10:13.482 Delete I/O Submission Queue (00h): Supported 00:10:13.482 Create I/O Submission Queue (01h): Supported 00:10:13.482 Get Log Page (02h): Supported 00:10:13.482 Delete I/O Completion Queue (04h): Supported 00:10:13.482 Create I/O Completion Queue (05h): Supported 00:10:13.482 Identify (06h): Supported 00:10:13.482 Abort (08h): Supported 00:10:13.482 Set Features (09h): Supported 00:10:13.482 Get Features (0Ah): Supported 00:10:13.482 Asynchronous Event Request (0Ch): Supported 00:10:13.482 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:13.482 Directive Send (19h): Supported 00:10:13.482 Directive Receive (1Ah): Supported 00:10:13.482 Virtualization Management (1Ch): Supported 00:10:13.482 Doorbell Buffer Config (7Ch): Supported 00:10:13.482 Format NVM (80h): Supported LBA-Change 00:10:13.482 I/O Commands 00:10:13.482 ------------ 00:10:13.482 Flush (00h): Supported LBA-Change 00:10:13.482 Write (01h): Supported LBA-Change 00:10:13.482 Read (02h): Supported 00:10:13.482 Compare (05h): Supported 00:10:13.482 Write Zeroes (08h): Supported LBA-Change 00:10:13.482 Dataset Management (09h): Supported LBA-Change 00:10:13.482 Unknown (0Ch): Supported 00:10:13.482 Unknown (12h): Supported 00:10:13.482 Copy (19h): Supported LBA-Change 00:10:13.482 Unknown (1Dh): Supported LBA-Change 00:10:13.482 00:10:13.482 Error Log 00:10:13.482 ========= 00:10:13.482 00:10:13.482 Arbitration 00:10:13.482 =========== 00:10:13.482 Arbitration Burst: no limit 00:10:13.482 00:10:13.482 Power Management 00:10:13.482 ================ 00:10:13.482 Number of Power States: 1 00:10:13.482 Current Power State: Power State #0 00:10:13.482 Power State #0: 00:10:13.482 Max Power: 25.00 W 00:10:13.482 Non-Operational State: Operational 00:10:13.482 Entry Latency: 16 microseconds 00:10:13.482 Exit Latency: 4 microseconds 00:10:13.482 Relative Read Throughput: 0 00:10:13.482 Relative Read Latency: 0 00:10:13.482 Relative Write Throughput: 0 00:10:13.482 Relative Write Latency: 0 00:10:13.482 Idle Power: Not Reported 00:10:13.482 Active Power: Not Reported 00:10:13.482 Non-Operational Permissive Mode: Not Supported 00:10:13.482 00:10:13.482 Health Information 00:10:13.482 ================== 00:10:13.482 Critical Warnings: 00:10:13.482 Available Spare Space: OK 00:10:13.482 Temperature: OK 00:10:13.482 Device Reliability: OK 00:10:13.482 Read Only: No 00:10:13.482 Volatile Memory Backup: OK 00:10:13.482 Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.482 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:13.482 Available Spare: 0% 00:10:13.482 Available Spare Threshold: 0% 00:10:13.482 Life Percentage Used: 0% 00:10:13.482 Data Units Read: 1155 00:10:13.482 Data Units Written: 937 00:10:13.482 Host Read Commands: 47718 00:10:13.482 Host Write Commands: 44703 00:10:13.482 Controller Busy Time: 0 minutes 00:10:13.482 Power Cycles: 0 00:10:13.482 Power On Hours: 0 hours 00:10:13.482 Unsafe Shutdowns: 0 00:10:13.482 Unrecoverable Media Errors: 0 00:10:13.482 Lifetime Error Log Entries: 0 00:10:13.482 Warning 
Temperature Time: 0 minutes 00:10:13.482 Critical Temperature Time: 0 minutes 00:10:13.482 00:10:13.482 Number of Queues 00:10:13.482 ================ 00:10:13.482 Number of I/O Submission Queues: 64 00:10:13.482 Number of I/O Completion Queues: 64 00:10:13.482 00:10:13.482 ZNS Specific Controller Data 00:10:13.482 ============================ 00:10:13.482 Zone Append Size Limit: 0 00:10:13.482 00:10:13.482 00:10:13.482 Active Namespaces 00:10:13.482 ================= 00:10:13.482 Namespace ID:1 00:10:13.482 Error Recovery Timeout: Unlimited 00:10:13.482 Command Set Identifier: NVM (00h) 00:10:13.482 Deallocate: Supported 00:10:13.482 Deallocated/Unwritten Error: Supported 00:10:13.482 Deallocated Read Value: All 0x00 00:10:13.482 Deallocate in Write Zeroes: Not Supported 00:10:13.482 Deallocated Guard Field: 0xFFFF 00:10:13.482 Flush: Supported 00:10:13.482 Reservation: Not Supported 00:10:13.482 Namespace Sharing Capabilities: Private 00:10:13.482 Size (in LBAs): 1310720 (5GiB) 00:10:13.482 Capacity (in LBAs): 1310720 (5GiB) 00:10:13.482 Utilization (in LBAs): 1310720 (5GiB) 00:10:13.482 Thin Provisioning: Not Supported 00:10:13.482 Per-NS Atomic Units: No 00:10:13.482 Maximum Single Source Range Length: 128 00:10:13.482 Maximum Copy Length: 128 00:10:13.482 Maximum Source Range Count: 128 00:10:13.482 NGUID/EUI64 Never Reused: No 00:10:13.482 Namespace Write Protected: No 00:10:13.482 Number of LBA Formats: 8 00:10:13.482 Current LBA Format: LBA Format #04 00:10:13.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:13.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:13.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:13.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:13.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:13.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:13.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:13.482 00:10:13.482 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:13.482 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:13.741 ===================================================== 00:10:13.741 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:13.741 ===================================================== 00:10:13.741 Controller Capabilities/Features 00:10:13.741 ================================ 00:10:13.741 Vendor ID: 1b36 00:10:13.741 Subsystem Vendor ID: 1af4 00:10:13.741 Serial Number: 12342 00:10:13.741 Model Number: QEMU NVMe Ctrl 00:10:13.741 Firmware Version: 8.0.0 00:10:13.741 Recommended Arb Burst: 6 00:10:13.741 IEEE OUI Identifier: 00 54 52 00:10:13.741 Multi-path I/O 00:10:13.741 May have multiple subsystem ports: No 00:10:13.741 May have multiple controllers: No 00:10:13.741 Associated with SR-IOV VF: No 00:10:13.741 Max Data Transfer Size: 524288 00:10:13.741 Max Number of Namespaces: 256 00:10:13.741 Max Number of I/O Queues: 64 00:10:13.741 NVMe Specification Version (VS): 1.4 00:10:13.741 NVMe Specification Version (Identify): 1.4 00:10:13.741 Maximum Queue Entries: 2048 00:10:13.741 Contiguous Queues Required: Yes 00:10:13.741 Arbitration Mechanisms Supported 00:10:13.741 Weighted Round Robin: Not Supported 00:10:13.741 Vendor Specific: Not Supported 00:10:13.741 Reset Timeout: 7500 ms 00:10:13.741 Doorbell Stride: 4 bytes 00:10:13.741 NVM Subsystem Reset: Not Supported 
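For reference, the "(5GiB)" figure in the namespace listing above (Subsystem NQN nqn.2019-08.org.qemu:12341) follows directly from the LBA count and the active LBA format. The current format is LBA Format #04, i.e. 4096-byte data blocks with no metadata, so

    1310720 LBAs x 4096 bytes/LBA = 5,368,709,120 bytes = 5 GiB

which is exactly the size, capacity and utilization that spdk_nvme_identify reports for this namespace.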
00:10:13.741 Command Sets Supported 00:10:13.741 NVM Command Set: Supported 00:10:13.741 Boot Partition: Not Supported 00:10:13.741 Memory Page Size Minimum: 4096 bytes 00:10:13.741 Memory Page Size Maximum: 65536 bytes 00:10:13.741 Persistent Memory Region: Not Supported 00:10:13.741 Optional Asynchronous Events Supported 00:10:13.741 Namespace Attribute Notices: Supported 00:10:13.741 Firmware Activation Notices: Not Supported 00:10:13.741 ANA Change Notices: Not Supported 00:10:13.741 PLE Aggregate Log Change Notices: Not Supported 00:10:13.741 LBA Status Info Alert Notices: Not Supported 00:10:13.741 EGE Aggregate Log Change Notices: Not Supported 00:10:13.741 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.741 Zone Descriptor Change Notices: Not Supported 00:10:13.741 Discovery Log Change Notices: Not Supported 00:10:13.741 Controller Attributes 00:10:13.741 128-bit Host Identifier: Not Supported 00:10:13.741 Non-Operational Permissive Mode: Not Supported 00:10:13.741 NVM Sets: Not Supported 00:10:13.741 Read Recovery Levels: Not Supported 00:10:13.741 Endurance Groups: Not Supported 00:10:13.741 Predictable Latency Mode: Not Supported 00:10:13.741 Traffic Based Keep ALive: Not Supported 00:10:13.741 Namespace Granularity: Not Supported 00:10:13.741 SQ Associations: Not Supported 00:10:13.741 UUID List: Not Supported 00:10:13.741 Multi-Domain Subsystem: Not Supported 00:10:13.741 Fixed Capacity Management: Not Supported 00:10:13.741 Variable Capacity Management: Not Supported 00:10:13.741 Delete Endurance Group: Not Supported 00:10:13.741 Delete NVM Set: Not Supported 00:10:13.741 Extended LBA Formats Supported: Supported 00:10:13.741 Flexible Data Placement Supported: Not Supported 00:10:13.741 00:10:13.741 Controller Memory Buffer Support 00:10:13.741 ================================ 00:10:13.741 Supported: No 00:10:13.741 00:10:13.741 Persistent Memory Region Support 00:10:13.741 ================================ 00:10:13.741 Supported: No 00:10:13.741 00:10:13.741 Admin Command Set Attributes 00:10:13.741 ============================ 00:10:13.741 Security Send/Receive: Not Supported 00:10:13.741 Format NVM: Supported 00:10:13.741 Firmware Activate/Download: Not Supported 00:10:13.741 Namespace Management: Supported 00:10:13.741 Device Self-Test: Not Supported 00:10:13.741 Directives: Supported 00:10:13.741 NVMe-MI: Not Supported 00:10:13.741 Virtualization Management: Not Supported 00:10:13.741 Doorbell Buffer Config: Supported 00:10:13.741 Get LBA Status Capability: Not Supported 00:10:13.741 Command & Feature Lockdown Capability: Not Supported 00:10:13.741 Abort Command Limit: 4 00:10:13.741 Async Event Request Limit: 4 00:10:13.741 Number of Firmware Slots: N/A 00:10:13.741 Firmware Slot 1 Read-Only: N/A 00:10:13.741 Firmware Activation Without Reset: N/A 00:10:13.741 Multiple Update Detection Support: N/A 00:10:13.741 Firmware Update Granularity: No Information Provided 00:10:13.741 Per-Namespace SMART Log: Yes 00:10:13.741 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.741 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:13.741 Command Effects Log Page: Supported 00:10:13.741 Get Log Page Extended Data: Supported 00:10:13.741 Telemetry Log Pages: Not Supported 00:10:13.741 Persistent Event Log Pages: Not Supported 00:10:13.741 Supported Log Pages Log Page: May Support 00:10:13.741 Commands Supported & Effects Log Page: Not Supported 00:10:13.741 Feature Identifiers & Effects Log Page:May Support 00:10:13.741 NVMe-MI Commands & Effects Log Page: May 
Support 00:10:13.741 Data Area 4 for Telemetry Log: Not Supported 00:10:13.741 Error Log Page Entries Supported: 1 00:10:13.741 Keep Alive: Not Supported 00:10:13.742 00:10:13.742 NVM Command Set Attributes 00:10:13.742 ========================== 00:10:13.742 Submission Queue Entry Size 00:10:13.742 Max: 64 00:10:13.742 Min: 64 00:10:13.742 Completion Queue Entry Size 00:10:13.742 Max: 16 00:10:13.742 Min: 16 00:10:13.742 Number of Namespaces: 256 00:10:13.742 Compare Command: Supported 00:10:13.742 Write Uncorrectable Command: Not Supported 00:10:13.742 Dataset Management Command: Supported 00:10:13.742 Write Zeroes Command: Supported 00:10:13.742 Set Features Save Field: Supported 00:10:13.742 Reservations: Not Supported 00:10:13.742 Timestamp: Supported 00:10:13.742 Copy: Supported 00:10:13.742 Volatile Write Cache: Present 00:10:13.742 Atomic Write Unit (Normal): 1 00:10:13.742 Atomic Write Unit (PFail): 1 00:10:13.742 Atomic Compare & Write Unit: 1 00:10:13.742 Fused Compare & Write: Not Supported 00:10:13.742 Scatter-Gather List 00:10:13.742 SGL Command Set: Supported 00:10:13.742 SGL Keyed: Not Supported 00:10:13.742 SGL Bit Bucket Descriptor: Not Supported 00:10:13.742 SGL Metadata Pointer: Not Supported 00:10:13.742 Oversized SGL: Not Supported 00:10:13.742 SGL Metadata Address: Not Supported 00:10:13.742 SGL Offset: Not Supported 00:10:13.742 Transport SGL Data Block: Not Supported 00:10:13.742 Replay Protected Memory Block: Not Supported 00:10:13.742 00:10:13.742 Firmware Slot Information 00:10:13.742 ========================= 00:10:13.742 Active slot: 1 00:10:13.742 Slot 1 Firmware Revision: 1.0 00:10:13.742 00:10:13.742 00:10:13.742 Commands Supported and Effects 00:10:13.742 ============================== 00:10:13.742 Admin Commands 00:10:13.742 -------------- 00:10:13.742 Delete I/O Submission Queue (00h): Supported 00:10:13.742 Create I/O Submission Queue (01h): Supported 00:10:13.742 Get Log Page (02h): Supported 00:10:13.742 Delete I/O Completion Queue (04h): Supported 00:10:13.742 Create I/O Completion Queue (05h): Supported 00:10:13.742 Identify (06h): Supported 00:10:13.742 Abort (08h): Supported 00:10:13.742 Set Features (09h): Supported 00:10:13.742 Get Features (0Ah): Supported 00:10:13.742 Asynchronous Event Request (0Ch): Supported 00:10:13.742 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:13.742 Directive Send (19h): Supported 00:10:13.742 Directive Receive (1Ah): Supported 00:10:13.742 Virtualization Management (1Ch): Supported 00:10:13.742 Doorbell Buffer Config (7Ch): Supported 00:10:13.742 Format NVM (80h): Supported LBA-Change 00:10:13.742 I/O Commands 00:10:13.742 ------------ 00:10:13.742 Flush (00h): Supported LBA-Change 00:10:13.742 Write (01h): Supported LBA-Change 00:10:13.742 Read (02h): Supported 00:10:13.742 Compare (05h): Supported 00:10:13.742 Write Zeroes (08h): Supported LBA-Change 00:10:13.742 Dataset Management (09h): Supported LBA-Change 00:10:13.742 Unknown (0Ch): Supported 00:10:13.742 Unknown (12h): Supported 00:10:13.742 Copy (19h): Supported LBA-Change 00:10:13.742 Unknown (1Dh): Supported LBA-Change 00:10:13.742 00:10:13.742 Error Log 00:10:13.742 ========= 00:10:13.742 00:10:13.742 Arbitration 00:10:13.742 =========== 00:10:13.742 Arbitration Burst: no limit 00:10:13.742 00:10:13.742 Power Management 00:10:13.742 ================ 00:10:13.742 Number of Power States: 1 00:10:13.742 Current Power State: Power State #0 00:10:13.742 Power State #0: 00:10:13.742 Max Power: 25.00 W 00:10:13.742 Non-Operational State: 
Operational 00:10:13.742 Entry Latency: 16 microseconds 00:10:13.742 Exit Latency: 4 microseconds 00:10:13.742 Relative Read Throughput: 0 00:10:13.742 Relative Read Latency: 0 00:10:13.742 Relative Write Throughput: 0 00:10:13.742 Relative Write Latency: 0 00:10:13.742 Idle Power: Not Reported 00:10:13.742 Active Power: Not Reported 00:10:13.742 Non-Operational Permissive Mode: Not Supported 00:10:13.742 00:10:13.742 Health Information 00:10:13.742 ================== 00:10:13.742 Critical Warnings: 00:10:13.742 Available Spare Space: OK 00:10:13.742 Temperature: OK 00:10:13.742 Device Reliability: OK 00:10:13.742 Read Only: No 00:10:13.742 Volatile Memory Backup: OK 00:10:13.742 Current Temperature: 323 Kelvin (50 Celsius) 00:10:13.742 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:13.742 Available Spare: 0% 00:10:13.742 Available Spare Threshold: 0% 00:10:13.742 Life Percentage Used: 0% 00:10:13.742 Data Units Read: 2481 00:10:13.742 Data Units Written: 2161 00:10:13.742 Host Read Commands: 98144 00:10:13.742 Host Write Commands: 93914 00:10:13.742 Controller Busy Time: 0 minutes 00:10:13.742 Power Cycles: 0 00:10:13.742 Power On Hours: 0 hours 00:10:13.742 Unsafe Shutdowns: 0 00:10:13.742 Unrecoverable Media Errors: 0 00:10:13.742 Lifetime Error Log Entries: 0 00:10:13.742 Warning Temperature Time: 0 minutes 00:10:13.742 Critical Temperature Time: 0 minutes 00:10:13.742 00:10:13.742 Number of Queues 00:10:13.742 ================ 00:10:13.742 Number of I/O Submission Queues: 64 00:10:13.742 Number of I/O Completion Queues: 64 00:10:13.742 00:10:13.742 ZNS Specific Controller Data 00:10:13.742 ============================ 00:10:13.742 Zone Append Size Limit: 0 00:10:13.742 00:10:13.742 00:10:13.742 Active Namespaces 00:10:13.742 ================= 00:10:13.742 Namespace ID:1 00:10:13.742 Error Recovery Timeout: Unlimited 00:10:13.742 Command Set Identifier: NVM (00h) 00:10:13.742 Deallocate: Supported 00:10:13.742 Deallocated/Unwritten Error: Supported 00:10:13.742 Deallocated Read Value: All 0x00 00:10:13.742 Deallocate in Write Zeroes: Not Supported 00:10:13.742 Deallocated Guard Field: 0xFFFF 00:10:13.742 Flush: Supported 00:10:13.742 Reservation: Not Supported 00:10:13.742 Namespace Sharing Capabilities: Private 00:10:13.742 Size (in LBAs): 1048576 (4GiB) 00:10:13.742 Capacity (in LBAs): 1048576 (4GiB) 00:10:13.742 Utilization (in LBAs): 1048576 (4GiB) 00:10:13.742 Thin Provisioning: Not Supported 00:10:13.742 Per-NS Atomic Units: No 00:10:13.742 Maximum Single Source Range Length: 128 00:10:13.742 Maximum Copy Length: 128 00:10:13.742 Maximum Source Range Count: 128 00:10:13.742 NGUID/EUI64 Never Reused: No 00:10:13.742 Namespace Write Protected: No 00:10:13.742 Number of LBA Formats: 8 00:10:13.742 Current LBA Format: LBA Format #04 00:10:13.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:13.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:13.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:13.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:13.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:13.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:13.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:13.742 00:10:13.742 Namespace ID:2 00:10:13.742 Error Recovery Timeout: Unlimited 00:10:13.742 Command Set Identifier: NVM (00h) 00:10:13.742 Deallocate: Supported 00:10:13.742 Deallocated/Unwritten Error: Supported 00:10:13.742 Deallocated Read Value: All 
0x00 00:10:13.742 Deallocate in Write Zeroes: Not Supported 00:10:13.742 Deallocated Guard Field: 0xFFFF 00:10:13.742 Flush: Supported 00:10:13.742 Reservation: Not Supported 00:10:13.742 Namespace Sharing Capabilities: Private 00:10:13.742 Size (in LBAs): 1048576 (4GiB) 00:10:13.742 Capacity (in LBAs): 1048576 (4GiB) 00:10:13.742 Utilization (in LBAs): 1048576 (4GiB) 00:10:13.742 Thin Provisioning: Not Supported 00:10:13.742 Per-NS Atomic Units: No 00:10:13.742 Maximum Single Source Range Length: 128 00:10:13.742 Maximum Copy Length: 128 00:10:13.742 Maximum Source Range Count: 128 00:10:13.742 NGUID/EUI64 Never Reused: No 00:10:13.742 Namespace Write Protected: No 00:10:13.742 Number of LBA Formats: 8 00:10:13.742 Current LBA Format: LBA Format #04 00:10:13.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:13.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:13.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:13.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:13.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:13.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:13.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:13.742 00:10:13.742 Namespace ID:3 00:10:13.742 Error Recovery Timeout: Unlimited 00:10:13.742 Command Set Identifier: NVM (00h) 00:10:13.742 Deallocate: Supported 00:10:13.742 Deallocated/Unwritten Error: Supported 00:10:13.742 Deallocated Read Value: All 0x00 00:10:13.743 Deallocate in Write Zeroes: Not Supported 00:10:13.743 Deallocated Guard Field: 0xFFFF 00:10:13.743 Flush: Supported 00:10:13.743 Reservation: Not Supported 00:10:13.743 Namespace Sharing Capabilities: Private 00:10:13.743 Size (in LBAs): 1048576 (4GiB) 00:10:13.743 Capacity (in LBAs): 1048576 (4GiB) 00:10:13.743 Utilization (in LBAs): 1048576 (4GiB) 00:10:13.743 Thin Provisioning: Not Supported 00:10:13.743 Per-NS Atomic Units: No 00:10:13.743 Maximum Single Source Range Length: 128 00:10:13.743 Maximum Copy Length: 128 00:10:13.743 Maximum Source Range Count: 128 00:10:13.743 NGUID/EUI64 Never Reused: No 00:10:13.743 Namespace Write Protected: No 00:10:13.743 Number of LBA Formats: 8 00:10:13.743 Current LBA Format: LBA Format #04 00:10:13.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.743 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:13.743 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:13.743 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:13.743 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:13.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:13.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:13.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:13.743 00:10:13.743 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:13.743 18:28:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:14.003 ===================================================== 00:10:14.003 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:14.003 ===================================================== 00:10:14.003 Controller Capabilities/Features 00:10:14.003 ================================ 00:10:14.003 Vendor ID: 1b36 00:10:14.003 Subsystem Vendor ID: 1af4 00:10:14.003 Serial Number: 12343 00:10:14.003 Model Number: QEMU NVMe Ctrl 00:10:14.003 Firmware Version: 8.0.0 00:10:14.003 Recommended Arb Burst: 6 
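The identify dumps in this log are produced by the loop at nvme/nvme.sh lines 15-16, which runs spdk_nvme_identify once per PCIe BDF under test (the invocation for 0000:00:13.0 appears just above). A minimal standalone sketch of the same invocation, assuming the SPDK build path used in this run and with the BDF list hard-coded for illustration (the test script derives it at runtime):

#!/usr/bin/env bash
# Sketch only: dump controller/namespace identify data for each NVMe PCIe BDF,
# mirroring the nvme/nvme.sh@15-16 loop captured in this log.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin            # path used in this run
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0) # controllers attached in this test
for bdf in "${bdfs[@]}"; do
  "$SPDK_BIN"/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
done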
00:10:14.003 IEEE OUI Identifier: 00 54 52 00:10:14.003 Multi-path I/O 00:10:14.003 May have multiple subsystem ports: No 00:10:14.003 May have multiple controllers: Yes 00:10:14.003 Associated with SR-IOV VF: No 00:10:14.003 Max Data Transfer Size: 524288 00:10:14.003 Max Number of Namespaces: 256 00:10:14.003 Max Number of I/O Queues: 64 00:10:14.003 NVMe Specification Version (VS): 1.4 00:10:14.003 NVMe Specification Version (Identify): 1.4 00:10:14.003 Maximum Queue Entries: 2048 00:10:14.003 Contiguous Queues Required: Yes 00:10:14.003 Arbitration Mechanisms Supported 00:10:14.003 Weighted Round Robin: Not Supported 00:10:14.003 Vendor Specific: Not Supported 00:10:14.003 Reset Timeout: 7500 ms 00:10:14.003 Doorbell Stride: 4 bytes 00:10:14.003 NVM Subsystem Reset: Not Supported 00:10:14.003 Command Sets Supported 00:10:14.003 NVM Command Set: Supported 00:10:14.003 Boot Partition: Not Supported 00:10:14.003 Memory Page Size Minimum: 4096 bytes 00:10:14.003 Memory Page Size Maximum: 65536 bytes 00:10:14.003 Persistent Memory Region: Not Supported 00:10:14.003 Optional Asynchronous Events Supported 00:10:14.003 Namespace Attribute Notices: Supported 00:10:14.003 Firmware Activation Notices: Not Supported 00:10:14.003 ANA Change Notices: Not Supported 00:10:14.003 PLE Aggregate Log Change Notices: Not Supported 00:10:14.003 LBA Status Info Alert Notices: Not Supported 00:10:14.003 EGE Aggregate Log Change Notices: Not Supported 00:10:14.003 Normal NVM Subsystem Shutdown event: Not Supported 00:10:14.003 Zone Descriptor Change Notices: Not Supported 00:10:14.003 Discovery Log Change Notices: Not Supported 00:10:14.003 Controller Attributes 00:10:14.003 128-bit Host Identifier: Not Supported 00:10:14.003 Non-Operational Permissive Mode: Not Supported 00:10:14.003 NVM Sets: Not Supported 00:10:14.003 Read Recovery Levels: Not Supported 00:10:14.003 Endurance Groups: Supported 00:10:14.003 Predictable Latency Mode: Not Supported 00:10:14.003 Traffic Based Keep ALive: Not Supported 00:10:14.003 Namespace Granularity: Not Supported 00:10:14.003 SQ Associations: Not Supported 00:10:14.003 UUID List: Not Supported 00:10:14.003 Multi-Domain Subsystem: Not Supported 00:10:14.003 Fixed Capacity Management: Not Supported 00:10:14.003 Variable Capacity Management: Not Supported 00:10:14.003 Delete Endurance Group: Not Supported 00:10:14.003 Delete NVM Set: Not Supported 00:10:14.003 Extended LBA Formats Supported: Supported 00:10:14.003 Flexible Data Placement Supported: Supported 00:10:14.003 00:10:14.003 Controller Memory Buffer Support 00:10:14.003 ================================ 00:10:14.003 Supported: No 00:10:14.003 00:10:14.003 Persistent Memory Region Support 00:10:14.003 ================================ 00:10:14.003 Supported: No 00:10:14.003 00:10:14.003 Admin Command Set Attributes 00:10:14.003 ============================ 00:10:14.003 Security Send/Receive: Not Supported 00:10:14.003 Format NVM: Supported 00:10:14.003 Firmware Activate/Download: Not Supported 00:10:14.003 Namespace Management: Supported 00:10:14.003 Device Self-Test: Not Supported 00:10:14.003 Directives: Supported 00:10:14.003 NVMe-MI: Not Supported 00:10:14.003 Virtualization Management: Not Supported 00:10:14.003 Doorbell Buffer Config: Supported 00:10:14.003 Get LBA Status Capability: Not Supported 00:10:14.003 Command & Feature Lockdown Capability: Not Supported 00:10:14.003 Abort Command Limit: 4 00:10:14.003 Async Event Request Limit: 4 00:10:14.003 Number of Firmware Slots: N/A 00:10:14.003 Firmware Slot 1 
Read-Only: N/A 00:10:14.003 Firmware Activation Without Reset: N/A 00:10:14.003 Multiple Update Detection Support: N/A 00:10:14.003 Firmware Update Granularity: No Information Provided 00:10:14.003 Per-Namespace SMART Log: Yes 00:10:14.003 Asymmetric Namespace Access Log Page: Not Supported 00:10:14.003 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:14.003 Command Effects Log Page: Supported 00:10:14.003 Get Log Page Extended Data: Supported 00:10:14.003 Telemetry Log Pages: Not Supported 00:10:14.003 Persistent Event Log Pages: Not Supported 00:10:14.003 Supported Log Pages Log Page: May Support 00:10:14.003 Commands Supported & Effects Log Page: Not Supported 00:10:14.003 Feature Identifiers & Effects Log Page:May Support 00:10:14.003 NVMe-MI Commands & Effects Log Page: May Support 00:10:14.003 Data Area 4 for Telemetry Log: Not Supported 00:10:14.003 Error Log Page Entries Supported: 1 00:10:14.003 Keep Alive: Not Supported 00:10:14.003 00:10:14.003 NVM Command Set Attributes 00:10:14.003 ========================== 00:10:14.003 Submission Queue Entry Size 00:10:14.003 Max: 64 00:10:14.003 Min: 64 00:10:14.003 Completion Queue Entry Size 00:10:14.003 Max: 16 00:10:14.003 Min: 16 00:10:14.003 Number of Namespaces: 256 00:10:14.003 Compare Command: Supported 00:10:14.003 Write Uncorrectable Command: Not Supported 00:10:14.003 Dataset Management Command: Supported 00:10:14.003 Write Zeroes Command: Supported 00:10:14.003 Set Features Save Field: Supported 00:10:14.003 Reservations: Not Supported 00:10:14.003 Timestamp: Supported 00:10:14.003 Copy: Supported 00:10:14.003 Volatile Write Cache: Present 00:10:14.003 Atomic Write Unit (Normal): 1 00:10:14.003 Atomic Write Unit (PFail): 1 00:10:14.003 Atomic Compare & Write Unit: 1 00:10:14.003 Fused Compare & Write: Not Supported 00:10:14.003 Scatter-Gather List 00:10:14.003 SGL Command Set: Supported 00:10:14.003 SGL Keyed: Not Supported 00:10:14.003 SGL Bit Bucket Descriptor: Not Supported 00:10:14.003 SGL Metadata Pointer: Not Supported 00:10:14.003 Oversized SGL: Not Supported 00:10:14.003 SGL Metadata Address: Not Supported 00:10:14.003 SGL Offset: Not Supported 00:10:14.003 Transport SGL Data Block: Not Supported 00:10:14.003 Replay Protected Memory Block: Not Supported 00:10:14.003 00:10:14.003 Firmware Slot Information 00:10:14.003 ========================= 00:10:14.003 Active slot: 1 00:10:14.003 Slot 1 Firmware Revision: 1.0 00:10:14.003 00:10:14.003 00:10:14.003 Commands Supported and Effects 00:10:14.003 ============================== 00:10:14.003 Admin Commands 00:10:14.004 -------------- 00:10:14.004 Delete I/O Submission Queue (00h): Supported 00:10:14.004 Create I/O Submission Queue (01h): Supported 00:10:14.004 Get Log Page (02h): Supported 00:10:14.004 Delete I/O Completion Queue (04h): Supported 00:10:14.004 Create I/O Completion Queue (05h): Supported 00:10:14.004 Identify (06h): Supported 00:10:14.004 Abort (08h): Supported 00:10:14.004 Set Features (09h): Supported 00:10:14.004 Get Features (0Ah): Supported 00:10:14.004 Asynchronous Event Request (0Ch): Supported 00:10:14.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:14.004 Directive Send (19h): Supported 00:10:14.004 Directive Receive (1Ah): Supported 00:10:14.004 Virtualization Management (1Ch): Supported 00:10:14.004 Doorbell Buffer Config (7Ch): Supported 00:10:14.004 Format NVM (80h): Supported LBA-Change 00:10:14.004 I/O Commands 00:10:14.004 ------------ 00:10:14.004 Flush (00h): Supported LBA-Change 00:10:14.004 Write (01h): Supported 
LBA-Change 00:10:14.004 Read (02h): Supported 00:10:14.004 Compare (05h): Supported 00:10:14.004 Write Zeroes (08h): Supported LBA-Change 00:10:14.004 Dataset Management (09h): Supported LBA-Change 00:10:14.004 Unknown (0Ch): Supported 00:10:14.004 Unknown (12h): Supported 00:10:14.004 Copy (19h): Supported LBA-Change 00:10:14.004 Unknown (1Dh): Supported LBA-Change 00:10:14.004 00:10:14.004 Error Log 00:10:14.004 ========= 00:10:14.004 00:10:14.004 Arbitration 00:10:14.004 =========== 00:10:14.004 Arbitration Burst: no limit 00:10:14.004 00:10:14.004 Power Management 00:10:14.004 ================ 00:10:14.004 Number of Power States: 1 00:10:14.004 Current Power State: Power State #0 00:10:14.004 Power State #0: 00:10:14.004 Max Power: 25.00 W 00:10:14.004 Non-Operational State: Operational 00:10:14.004 Entry Latency: 16 microseconds 00:10:14.004 Exit Latency: 4 microseconds 00:10:14.004 Relative Read Throughput: 0 00:10:14.004 Relative Read Latency: 0 00:10:14.004 Relative Write Throughput: 0 00:10:14.004 Relative Write Latency: 0 00:10:14.004 Idle Power: Not Reported 00:10:14.004 Active Power: Not Reported 00:10:14.004 Non-Operational Permissive Mode: Not Supported 00:10:14.004 00:10:14.004 Health Information 00:10:14.004 ================== 00:10:14.004 Critical Warnings: 00:10:14.004 Available Spare Space: OK 00:10:14.004 Temperature: OK 00:10:14.004 Device Reliability: OK 00:10:14.004 Read Only: No 00:10:14.004 Volatile Memory Backup: OK 00:10:14.004 Current Temperature: 323 Kelvin (50 Celsius) 00:10:14.004 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:14.004 Available Spare: 0% 00:10:14.004 Available Spare Threshold: 0% 00:10:14.004 Life Percentage Used: 0% 00:10:14.004 Data Units Read: 1049 00:10:14.004 Data Units Written: 942 00:10:14.004 Host Read Commands: 34446 00:10:14.004 Host Write Commands: 33036 00:10:14.004 Controller Busy Time: 0 minutes 00:10:14.004 Power Cycles: 0 00:10:14.004 Power On Hours: 0 hours 00:10:14.004 Unsafe Shutdowns: 0 00:10:14.004 Unrecoverable Media Errors: 0 00:10:14.004 Lifetime Error Log Entries: 0 00:10:14.004 Warning Temperature Time: 0 minutes 00:10:14.004 Critical Temperature Time: 0 minutes 00:10:14.004 00:10:14.004 Number of Queues 00:10:14.004 ================ 00:10:14.004 Number of I/O Submission Queues: 64 00:10:14.004 Number of I/O Completion Queues: 64 00:10:14.004 00:10:14.004 ZNS Specific Controller Data 00:10:14.004 ============================ 00:10:14.004 Zone Append Size Limit: 0 00:10:14.004 00:10:14.004 00:10:14.004 Active Namespaces 00:10:14.004 ================= 00:10:14.004 Namespace ID:1 00:10:14.004 Error Recovery Timeout: Unlimited 00:10:14.004 Command Set Identifier: NVM (00h) 00:10:14.004 Deallocate: Supported 00:10:14.004 Deallocated/Unwritten Error: Supported 00:10:14.004 Deallocated Read Value: All 0x00 00:10:14.004 Deallocate in Write Zeroes: Not Supported 00:10:14.004 Deallocated Guard Field: 0xFFFF 00:10:14.004 Flush: Supported 00:10:14.004 Reservation: Not Supported 00:10:14.004 Namespace Sharing Capabilities: Multiple Controllers 00:10:14.004 Size (in LBAs): 262144 (1GiB) 00:10:14.004 Capacity (in LBAs): 262144 (1GiB) 00:10:14.004 Utilization (in LBAs): 262144 (1GiB) 00:10:14.004 Thin Provisioning: Not Supported 00:10:14.004 Per-NS Atomic Units: No 00:10:14.004 Maximum Single Source Range Length: 128 00:10:14.004 Maximum Copy Length: 128 00:10:14.004 Maximum Source Range Count: 128 00:10:14.004 NGUID/EUI64 Never Reused: No 00:10:14.004 Namespace Write Protected: No 00:10:14.004 Endurance group ID: 1 
00:10:14.004 Number of LBA Formats: 8 00:10:14.004 Current LBA Format: LBA Format #04 00:10:14.004 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:14.004 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:14.004 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:14.004 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:14.004 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:14.004 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:14.004 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:14.004 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:14.004 00:10:14.004 Get Feature FDP: 00:10:14.004 ================ 00:10:14.004 Enabled: Yes 00:10:14.004 FDP configuration index: 0 00:10:14.004 00:10:14.004 FDP configurations log page 00:10:14.004 =========================== 00:10:14.004 Number of FDP configurations: 1 00:10:14.004 Version: 0 00:10:14.004 Size: 112 00:10:14.004 FDP Configuration Descriptor: 0 00:10:14.004 Descriptor Size: 96 00:10:14.004 Reclaim Group Identifier format: 2 00:10:14.004 FDP Volatile Write Cache: Not Present 00:10:14.004 FDP Configuration: Valid 00:10:14.004 Vendor Specific Size: 0 00:10:14.004 Number of Reclaim Groups: 2 00:10:14.004 Number of Recalim Unit Handles: 8 00:10:14.004 Max Placement Identifiers: 128 00:10:14.004 Number of Namespaces Suppprted: 256 00:10:14.004 Reclaim unit Nominal Size: 6000000 bytes 00:10:14.004 Estimated Reclaim Unit Time Limit: Not Reported 00:10:14.004 RUH Desc #000: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #001: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #002: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #003: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #004: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #005: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #006: RUH Type: Initially Isolated 00:10:14.004 RUH Desc #007: RUH Type: Initially Isolated 00:10:14.004 00:10:14.004 FDP reclaim unit handle usage log page 00:10:14.004 ====================================== 00:10:14.004 Number of Reclaim Unit Handles: 8 00:10:14.004 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:14.004 RUH Usage Desc #001: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #002: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #003: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #004: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #005: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #006: RUH Attributes: Unused 00:10:14.004 RUH Usage Desc #007: RUH Attributes: Unused 00:10:14.004 00:10:14.004 FDP statistics log page 00:10:14.004 ======================= 00:10:14.004 Host bytes with metadata written: 574464000 00:10:14.004 Media bytes with metadata written: 575852544 00:10:14.004 Media bytes erased: 0 00:10:14.004 00:10:14.004 FDP events log page 00:10:14.004 =================== 00:10:14.004 Number of FDP events: 0 00:10:14.004 00:10:14.004 00:10:14.004 real 0m1.342s 00:10:14.004 user 0m0.469s 00:10:14.004 sys 0m0.646s 00:10:14.004 18:28:13 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.004 18:28:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:14.004 ************************************ 00:10:14.004 END TEST nvme_identify 00:10:14.005 ************************************ 00:10:14.005 18:28:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:14.005 18:28:13 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:14.005 18:28:13 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:10:14.005 18:28:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.005 ************************************ 00:10:14.005 START TEST nvme_perf 00:10:14.005 ************************************ 00:10:14.005 18:28:13 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:10:14.005 18:28:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:15.414 Initializing NVMe Controllers 00:10:15.414 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:15.414 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:15.414 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:15.414 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:15.414 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:15.414 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:15.414 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:15.414 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:15.414 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:15.414 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:15.414 Initialization complete. Launching workers. 00:10:15.414 ======================================================== 00:10:15.414 Latency(us) 00:10:15.414 Device Information : IOPS MiB/s Average min max 00:10:15.414 PCIE (0000:00:10.0) NSID 1 from core 0: 14889.84 174.49 8601.39 6457.79 36843.48 00:10:15.414 PCIE (0000:00:11.0) NSID 1 from core 0: 14889.84 174.49 8596.06 6269.36 36163.29 00:10:15.414 PCIE (0000:00:13.0) NSID 1 from core 0: 14889.84 174.49 8589.27 5277.26 36222.36 00:10:15.414 PCIE (0000:00:12.0) NSID 1 from core 0: 14889.84 174.49 8582.36 4923.81 35742.70 00:10:15.414 PCIE (0000:00:12.0) NSID 2 from core 0: 14889.84 174.49 8575.29 4636.96 35289.97 00:10:15.414 PCIE (0000:00:12.0) NSID 3 from core 0: 14953.75 175.24 8531.47 4354.97 30073.13 00:10:15.414 ======================================================== 00:10:15.414 Total : 89402.97 1047.69 8579.27 4354.97 36843.48 00:10:15.414 00:10:15.414 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:15.414 ================================================================================= 00:10:15.414 1.00000% : 7383.532us 00:10:15.414 10.00000% : 7669.715us 00:10:15.414 25.00000% : 7898.662us 00:10:15.414 50.00000% : 8242.082us 00:10:15.414 75.00000% : 8585.502us 00:10:15.414 90.00000% : 9329.579us 00:10:15.414 95.00000% : 10073.656us 00:10:15.414 98.00000% : 12534.833us 00:10:15.414 99.00000% : 18430.211us 00:10:15.414 99.50000% : 30678.861us 00:10:15.414 99.90000% : 36631.476us 00:10:15.414 99.99000% : 36860.423us 00:10:15.414 99.99900% : 36860.423us 00:10:15.414 99.99990% : 36860.423us 00:10:15.414 99.99999% : 36860.423us 00:10:15.414 00:10:15.414 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:15.414 ================================================================================= 00:10:15.414 1.00000% : 7440.769us 00:10:15.414 10.00000% : 7726.952us 00:10:15.414 25.00000% : 7955.899us 00:10:15.414 50.00000% : 8242.082us 00:10:15.415 75.00000% : 8528.266us 00:10:15.415 90.00000% : 9329.579us 00:10:15.415 95.00000% : 10016.419us 00:10:15.415 98.00000% : 13450.620us 00:10:15.415 99.00000% : 18888.105us 00:10:15.415 99.50000% : 30449.914us 00:10:15.415 99.90000% : 35944.636us 00:10:15.415 99.99000% : 36173.583us 00:10:15.415 99.99900% : 36173.583us 00:10:15.415 99.99990% : 36173.583us 00:10:15.415 99.99999% : 36173.583us 
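The nvme_perf stage above (nvme/nvme.sh line 22) drives all attached controllers with a queue depth of 128, a read-only workload, 12288-byte I/Os and a 1-second run; the repeated -L flag is presumably what enables the latency summaries and per-bucket histograms that follow. A sketch of rerunning the same workload by hand, with the flags copied verbatim from the logged invocation (only -q/-w/-o/-t are annotated; -LL, -i 0 and -N are kept as-is rather than re-interpreted here):

#!/usr/bin/env bash
# Sketch only: repeat the perf workload captured above against whatever NVMe
# controllers SPDK can claim. Flags are taken verbatim from nvme/nvme.sh@22.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path used in this run
"$SPDK_BIN"/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
# -q 128   : 128 outstanding I/Os per queue
# -w read  : read-only (sequential read) workload
# -o 12288 : I/O size in bytes (12 KiB)
# -t 1     : run time in seconds
# -LL -i 0 -N : copied as-is from the logged invocation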
00:10:15.415 00:10:15.415 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:15.415 ================================================================================= 00:10:15.415 1.00000% : 7440.769us 00:10:15.415 10.00000% : 7726.952us 00:10:15.415 25.00000% : 7955.899us 00:10:15.415 50.00000% : 8242.082us 00:10:15.415 75.00000% : 8528.266us 00:10:15.415 90.00000% : 9329.579us 00:10:15.415 95.00000% : 9959.183us 00:10:15.415 98.00000% : 13794.040us 00:10:15.415 99.00000% : 18544.685us 00:10:15.415 99.50000% : 30907.808us 00:10:15.415 99.90000% : 36173.583us 00:10:15.415 99.99000% : 36402.529us 00:10:15.415 99.99900% : 36402.529us 00:10:15.415 99.99990% : 36402.529us 00:10:15.415 99.99999% : 36402.529us 00:10:15.415 00:10:15.415 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:15.415 ================================================================================= 00:10:15.415 1.00000% : 7440.769us 00:10:15.415 10.00000% : 7726.952us 00:10:15.415 25.00000% : 7955.899us 00:10:15.415 50.00000% : 8242.082us 00:10:15.415 75.00000% : 8528.266us 00:10:15.415 90.00000% : 9329.579us 00:10:15.415 95.00000% : 10016.419us 00:10:15.415 98.00000% : 13679.567us 00:10:15.415 99.00000% : 17972.318us 00:10:15.415 99.50000% : 30449.914us 00:10:15.415 99.90000% : 35715.689us 00:10:15.415 99.99000% : 35944.636us 00:10:15.415 99.99900% : 35944.636us 00:10:15.415 99.99990% : 35944.636us 00:10:15.415 99.99999% : 35944.636us 00:10:15.415 00:10:15.415 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:15.415 ================================================================================= 00:10:15.415 1.00000% : 7440.769us 00:10:15.415 10.00000% : 7726.952us 00:10:15.415 25.00000% : 7955.899us 00:10:15.415 50.00000% : 8242.082us 00:10:15.415 75.00000% : 8528.266us 00:10:15.415 90.00000% : 9329.579us 00:10:15.415 95.00000% : 10016.419us 00:10:15.415 98.00000% : 13049.963us 00:10:15.415 99.00000% : 17399.951us 00:10:15.415 99.50000% : 29992.021us 00:10:15.415 99.90000% : 35257.796us 00:10:15.415 99.99000% : 35486.742us 00:10:15.415 99.99900% : 35486.742us 00:10:15.415 99.99990% : 35486.742us 00:10:15.415 99.99999% : 35486.742us 00:10:15.415 00:10:15.415 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:15.415 ================================================================================= 00:10:15.415 1.00000% : 7383.532us 00:10:15.415 10.00000% : 7726.952us 00:10:15.415 25.00000% : 7955.899us 00:10:15.415 50.00000% : 8242.082us 00:10:15.415 75.00000% : 8528.266us 00:10:15.415 90.00000% : 9386.816us 00:10:15.415 95.00000% : 10016.419us 00:10:15.415 98.00000% : 12363.123us 00:10:15.415 99.00000% : 17628.898us 00:10:15.415 99.50000% : 24153.879us 00:10:15.415 99.90000% : 29992.021us 00:10:15.415 99.99000% : 30220.968us 00:10:15.415 99.99900% : 30220.968us 00:10:15.415 99.99990% : 30220.968us 00:10:15.415 99.99999% : 30220.968us 00:10:15.415 00:10:15.415 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:15.415 ============================================================================== 00:10:15.415 Range in us Cumulative IO count 00:10:15.415 6439.127 - 6467.745: 0.0134% ( 2) 00:10:15.415 6467.745 - 6496.363: 0.0402% ( 4) 00:10:15.415 6496.363 - 6524.982: 0.0469% ( 1) 00:10:15.415 6524.982 - 6553.600: 0.0536% ( 1) 00:10:15.415 6553.600 - 6582.218: 0.0671% ( 2) 00:10:15.415 6582.218 - 6610.837: 0.0805% ( 2) 00:10:15.415 6610.837 - 6639.455: 0.0872% ( 1) 00:10:15.415 6639.455 - 6668.073: 0.1006% ( 2) 00:10:15.415 
6668.073 - 6696.692: 0.1140% ( 2) 00:10:15.415 6696.692 - 6725.310: 0.1207% ( 1) 00:10:15.415 6725.310 - 6753.928: 0.1341% ( 2) 00:10:15.415 6753.928 - 6782.547: 0.1475% ( 2) 00:10:15.415 6782.547 - 6811.165: 0.1609% ( 2) 00:10:15.415 6811.165 - 6839.783: 0.1744% ( 2) 00:10:15.415 6839.783 - 6868.402: 0.1878% ( 2) 00:10:15.415 6868.402 - 6897.020: 0.2012% ( 2) 00:10:15.415 6925.638 - 6954.257: 0.2146% ( 2) 00:10:15.415 6954.257 - 6982.875: 0.2280% ( 2) 00:10:15.415 6982.875 - 7011.493: 0.2347% ( 1) 00:10:15.415 7011.493 - 7040.112: 0.2548% ( 3) 00:10:15.415 7040.112 - 7068.730: 0.2615% ( 1) 00:10:15.415 7068.730 - 7097.348: 0.2749% ( 2) 00:10:15.415 7097.348 - 7125.967: 0.2884% ( 2) 00:10:15.415 7125.967 - 7154.585: 0.3018% ( 2) 00:10:15.415 7154.585 - 7183.203: 0.3353% ( 5) 00:10:15.415 7183.203 - 7211.822: 0.3889% ( 8) 00:10:15.415 7211.822 - 7240.440: 0.4225% ( 5) 00:10:15.415 7240.440 - 7269.059: 0.5432% ( 18) 00:10:15.415 7269.059 - 7297.677: 0.7041% ( 24) 00:10:15.415 7297.677 - 7326.295: 0.9120% ( 31) 00:10:15.415 7326.295 - 7383.532: 1.4083% ( 74) 00:10:15.415 7383.532 - 7440.769: 2.2197% ( 121) 00:10:15.415 7440.769 - 7498.005: 3.3798% ( 173) 00:10:15.415 7498.005 - 7555.242: 5.3514% ( 294) 00:10:15.415 7555.242 - 7612.479: 7.8863% ( 378) 00:10:15.415 7612.479 - 7669.715: 10.9643% ( 459) 00:10:15.415 7669.715 - 7726.952: 14.7197% ( 560) 00:10:15.415 7726.952 - 7784.189: 18.5555% ( 572) 00:10:15.415 7784.189 - 7841.425: 22.5992% ( 603) 00:10:15.415 7841.425 - 7898.662: 26.9246% ( 645) 00:10:15.415 7898.662 - 7955.899: 31.2031% ( 638) 00:10:15.415 7955.899 - 8013.135: 35.4413% ( 632) 00:10:15.415 8013.135 - 8070.372: 39.8471% ( 657) 00:10:15.415 8070.372 - 8127.609: 44.2664% ( 659) 00:10:15.415 8127.609 - 8184.845: 48.5783% ( 643) 00:10:15.415 8184.845 - 8242.082: 53.0781% ( 671) 00:10:15.415 8242.082 - 8299.319: 57.5577% ( 668) 00:10:15.415 8299.319 - 8356.555: 61.9702% ( 658) 00:10:15.415 8356.555 - 8413.792: 66.1212% ( 619) 00:10:15.415 8413.792 - 8471.029: 70.1918% ( 607) 00:10:15.415 8471.029 - 8528.266: 74.1148% ( 585) 00:10:15.415 8528.266 - 8585.502: 77.4745% ( 501) 00:10:15.415 8585.502 - 8642.739: 80.3447% ( 428) 00:10:15.415 8642.739 - 8699.976: 82.4504% ( 314) 00:10:15.415 8699.976 - 8757.212: 84.0598% ( 240) 00:10:15.415 8757.212 - 8814.449: 85.0255% ( 144) 00:10:15.415 8814.449 - 8871.686: 85.8771% ( 127) 00:10:15.415 8871.686 - 8928.922: 86.5746% ( 104) 00:10:15.415 8928.922 - 8986.159: 87.2183% ( 96) 00:10:15.415 8986.159 - 9043.396: 87.7012% ( 72) 00:10:15.415 9043.396 - 9100.632: 88.2041% ( 75) 00:10:15.415 9100.632 - 9157.869: 88.7004% ( 74) 00:10:15.415 9157.869 - 9215.106: 89.1832% ( 72) 00:10:15.415 9215.106 - 9272.342: 89.6526% ( 70) 00:10:15.415 9272.342 - 9329.579: 90.0282% ( 56) 00:10:15.415 9329.579 - 9386.816: 90.4909% ( 69) 00:10:15.415 9386.816 - 9444.052: 90.9201% ( 64) 00:10:15.415 9444.052 - 9501.289: 91.3560% ( 65) 00:10:15.415 9501.289 - 9558.526: 91.8120% ( 68) 00:10:15.415 9558.526 - 9615.762: 92.2814% ( 70) 00:10:15.415 9615.762 - 9672.999: 92.6972% ( 62) 00:10:15.415 9672.999 - 9730.236: 93.0392% ( 51) 00:10:15.415 9730.236 - 9787.472: 93.3879% ( 52) 00:10:15.415 9787.472 - 9844.709: 93.7433% ( 53) 00:10:15.415 9844.709 - 9901.946: 94.1725% ( 64) 00:10:15.415 9901.946 - 9959.183: 94.5547% ( 57) 00:10:15.415 9959.183 - 10016.419: 94.9101% ( 53) 00:10:15.415 10016.419 - 10073.656: 95.2589% ( 52) 00:10:15.415 10073.656 - 10130.893: 95.5874% ( 49) 00:10:15.415 10130.893 - 10188.129: 95.9227% ( 50) 00:10:15.415 10188.129 - 10245.366: 96.1843% ( 39) 
00:10:15.415 10245.366 - 10302.603: 96.4123% ( 34) 00:10:15.415 10302.603 - 10359.839: 96.5866% ( 26) 00:10:15.415 10359.839 - 10417.076: 96.7275% ( 21) 00:10:15.415 10417.076 - 10474.313: 96.8415% ( 17) 00:10:15.415 10474.313 - 10531.549: 96.9756% ( 20) 00:10:15.415 10531.549 - 10588.786: 97.0494% ( 11) 00:10:15.415 10588.786 - 10646.023: 97.1030% ( 8) 00:10:15.415 10646.023 - 10703.259: 97.1701% ( 10) 00:10:15.415 10703.259 - 10760.496: 97.2371% ( 10) 00:10:15.415 10760.496 - 10817.733: 97.3042% ( 10) 00:10:15.415 10817.733 - 10874.969: 97.3444% ( 6) 00:10:15.415 10874.969 - 10932.206: 97.4450% ( 15) 00:10:15.416 10932.206 - 10989.443: 97.4987% ( 8) 00:10:15.416 10989.443 - 11046.679: 97.5054% ( 1) 00:10:15.416 11046.679 - 11103.916: 97.5322% ( 4) 00:10:15.416 11103.916 - 11161.153: 97.5389% ( 1) 00:10:15.416 11161.153 - 11218.390: 97.5456% ( 1) 00:10:15.416 11218.390 - 11275.626: 97.5590% ( 2) 00:10:15.416 11275.626 - 11332.863: 97.5724% ( 2) 00:10:15.416 11332.863 - 11390.100: 97.5858% ( 2) 00:10:15.416 11390.100 - 11447.336: 97.6060% ( 3) 00:10:15.416 11447.336 - 11504.573: 97.6328% ( 4) 00:10:15.416 11504.573 - 11561.810: 97.6462% ( 2) 00:10:15.416 11561.810 - 11619.046: 97.6663% ( 3) 00:10:15.416 11619.046 - 11676.283: 97.6931% ( 4) 00:10:15.416 11676.283 - 11733.520: 97.7267% ( 5) 00:10:15.416 11733.520 - 11790.756: 97.7401% ( 2) 00:10:15.416 11790.756 - 11847.993: 97.7669% ( 4) 00:10:15.416 11847.993 - 11905.230: 97.7870% ( 3) 00:10:15.416 11905.230 - 11962.466: 97.8004% ( 2) 00:10:15.416 11962.466 - 12019.703: 97.8273% ( 4) 00:10:15.416 12019.703 - 12076.940: 97.8541% ( 4) 00:10:15.416 12076.940 - 12134.176: 97.8675% ( 2) 00:10:15.416 12134.176 - 12191.413: 97.8943% ( 4) 00:10:15.416 12191.413 - 12248.650: 97.9077% ( 2) 00:10:15.416 12248.650 - 12305.886: 97.9345% ( 4) 00:10:15.416 12305.886 - 12363.123: 97.9480% ( 2) 00:10:15.416 12363.123 - 12420.360: 97.9748% ( 4) 00:10:15.416 12420.360 - 12477.597: 97.9949% ( 3) 00:10:15.416 12477.597 - 12534.833: 98.0083% ( 2) 00:10:15.416 12534.833 - 12592.070: 98.0418% ( 5) 00:10:15.416 12592.070 - 12649.307: 98.0553% ( 2) 00:10:15.416 12649.307 - 12706.543: 98.0754% ( 3) 00:10:15.416 12706.543 - 12763.780: 98.0955% ( 3) 00:10:15.416 12763.780 - 12821.017: 98.1089% ( 2) 00:10:15.416 12821.017 - 12878.253: 98.1223% ( 2) 00:10:15.416 12878.253 - 12935.490: 98.1290% ( 1) 00:10:15.416 12935.490 - 12992.727: 98.1491% ( 3) 00:10:15.416 13049.963 - 13107.200: 98.1558% ( 1) 00:10:15.416 13107.200 - 13164.437: 98.1827% ( 4) 00:10:15.416 13164.437 - 13221.673: 98.1894% ( 1) 00:10:15.416 13221.673 - 13278.910: 98.1961% ( 1) 00:10:15.416 13278.910 - 13336.147: 98.2028% ( 1) 00:10:15.416 13336.147 - 13393.383: 98.2229% ( 3) 00:10:15.416 13393.383 - 13450.620: 98.2296% ( 1) 00:10:15.416 13450.620 - 13507.857: 98.2363% ( 1) 00:10:15.416 13507.857 - 13565.093: 98.2497% ( 2) 00:10:15.416 13565.093 - 13622.330: 98.2564% ( 1) 00:10:15.416 13622.330 - 13679.567: 98.2698% ( 2) 00:10:15.416 13679.567 - 13736.803: 98.2766% ( 1) 00:10:15.416 13736.803 - 13794.040: 98.2833% ( 1) 00:10:15.416 14251.934 - 14309.170: 98.2900% ( 1) 00:10:15.416 14309.170 - 14366.407: 98.2967% ( 1) 00:10:15.416 14366.407 - 14423.644: 98.3101% ( 2) 00:10:15.416 14480.880 - 14538.117: 98.3235% ( 2) 00:10:15.416 14595.354 - 14652.590: 98.3369% ( 2) 00:10:15.416 14652.590 - 14767.064: 98.3436% ( 1) 00:10:15.416 14767.064 - 14881.537: 98.3704% ( 4) 00:10:15.416 14881.537 - 14996.010: 98.3839% ( 2) 00:10:15.416 14996.010 - 15110.484: 98.4040% ( 3) 00:10:15.416 15110.484 - 15224.957: 98.4308% 
( 4) 00:10:15.416 15224.957 - 15339.431: 98.4442% ( 2) 00:10:15.416 15339.431 - 15453.904: 98.4576% ( 2) 00:10:15.416 15453.904 - 15568.377: 98.4844% ( 4) 00:10:15.416 15568.377 - 15682.851: 98.4911% ( 1) 00:10:15.416 15682.851 - 15797.324: 98.5113% ( 3) 00:10:15.416 15797.324 - 15911.797: 98.5314% ( 3) 00:10:15.416 15911.797 - 16026.271: 98.5448% ( 2) 00:10:15.416 16026.271 - 16140.744: 98.5649% ( 3) 00:10:15.416 16140.744 - 16255.217: 98.5783% ( 2) 00:10:15.416 16255.217 - 16369.691: 98.5917% ( 2) 00:10:15.416 16369.691 - 16484.164: 98.6119% ( 3) 00:10:15.416 16484.164 - 16598.638: 98.6320% ( 3) 00:10:15.416 16598.638 - 16713.111: 98.6454% ( 2) 00:10:15.416 16713.111 - 16827.584: 98.6655% ( 3) 00:10:15.416 16827.584 - 16942.058: 98.6789% ( 2) 00:10:15.416 16942.058 - 17056.531: 98.6990% ( 3) 00:10:15.416 17056.531 - 17171.004: 98.7326% ( 5) 00:10:15.416 17171.004 - 17285.478: 98.7527% ( 3) 00:10:15.416 17285.478 - 17399.951: 98.7795% ( 4) 00:10:15.416 17399.951 - 17514.424: 98.7929% ( 2) 00:10:15.416 17514.424 - 17628.898: 98.8197% ( 4) 00:10:15.416 17628.898 - 17743.371: 98.8399% ( 3) 00:10:15.416 17743.371 - 17857.845: 98.8600% ( 3) 00:10:15.416 17857.845 - 17972.318: 98.8801% ( 3) 00:10:15.416 17972.318 - 18086.791: 98.9069% ( 4) 00:10:15.416 18086.791 - 18201.265: 98.9472% ( 6) 00:10:15.416 18201.265 - 18315.738: 98.9740% ( 4) 00:10:15.416 18315.738 - 18430.211: 99.0075% ( 5) 00:10:15.416 18430.211 - 18544.685: 99.0276% ( 3) 00:10:15.416 18544.685 - 18659.158: 99.0545% ( 4) 00:10:15.416 18659.158 - 18773.631: 99.0880% ( 5) 00:10:15.416 18773.631 - 18888.105: 99.1215% ( 5) 00:10:15.416 18888.105 - 19002.578: 99.1416% ( 3) 00:10:15.416 29534.128 - 29763.074: 99.1685% ( 4) 00:10:15.416 29763.074 - 29992.021: 99.2623% ( 14) 00:10:15.416 29992.021 - 30220.968: 99.3361% ( 11) 00:10:15.416 30220.968 - 30449.914: 99.4233% ( 13) 00:10:15.416 30449.914 - 30678.861: 99.5105% ( 13) 00:10:15.416 30678.861 - 30907.808: 99.5708% ( 9) 00:10:15.416 35486.742 - 35715.689: 99.6043% ( 5) 00:10:15.416 35715.689 - 35944.636: 99.6781% ( 11) 00:10:15.416 35944.636 - 36173.583: 99.7653% ( 13) 00:10:15.416 36173.583 - 36402.529: 99.8391% ( 11) 00:10:15.416 36402.529 - 36631.476: 99.9262% ( 13) 00:10:15.416 36631.476 - 36860.423: 100.0000% ( 11) 00:10:15.416 00:10:15.416 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:15.416 ============================================================================== 00:10:15.416 Range in us Cumulative IO count 00:10:15.416 6267.417 - 6296.035: 0.0201% ( 3) 00:10:15.416 6296.035 - 6324.653: 0.0402% ( 3) 00:10:15.416 6324.653 - 6353.272: 0.0536% ( 2) 00:10:15.416 6353.272 - 6381.890: 0.0671% ( 2) 00:10:15.416 6381.890 - 6410.508: 0.0805% ( 2) 00:10:15.416 6410.508 - 6439.127: 0.0872% ( 1) 00:10:15.416 6439.127 - 6467.745: 0.1140% ( 4) 00:10:15.416 6467.745 - 6496.363: 0.1341% ( 3) 00:10:15.416 6496.363 - 6524.982: 0.1475% ( 2) 00:10:15.416 6524.982 - 6553.600: 0.1609% ( 2) 00:10:15.416 6553.600 - 6582.218: 0.1677% ( 1) 00:10:15.416 6582.218 - 6610.837: 0.1811% ( 2) 00:10:15.416 6610.837 - 6639.455: 0.1945% ( 2) 00:10:15.416 6639.455 - 6668.073: 0.2012% ( 1) 00:10:15.416 6668.073 - 6696.692: 0.2146% ( 2) 00:10:15.416 6696.692 - 6725.310: 0.2280% ( 2) 00:10:15.416 6725.310 - 6753.928: 0.2414% ( 2) 00:10:15.416 6753.928 - 6782.547: 0.2548% ( 2) 00:10:15.416 6782.547 - 6811.165: 0.2682% ( 2) 00:10:15.416 6811.165 - 6839.783: 0.2817% ( 2) 00:10:15.416 6839.783 - 6868.402: 0.2951% ( 2) 00:10:15.416 6868.402 - 6897.020: 0.3085% ( 2) 00:10:15.416 6897.020 - 
6925.638: 0.3219% ( 2) 00:10:15.416 6925.638 - 6954.257: 0.3353% ( 2) 00:10:15.416 6954.257 - 6982.875: 0.3487% ( 2) 00:10:15.416 6982.875 - 7011.493: 0.3688% ( 3) 00:10:15.416 7011.493 - 7040.112: 0.3822% ( 2) 00:10:15.416 7040.112 - 7068.730: 0.3957% ( 2) 00:10:15.416 7068.730 - 7097.348: 0.4091% ( 2) 00:10:15.416 7097.348 - 7125.967: 0.4225% ( 2) 00:10:15.416 7125.967 - 7154.585: 0.4292% ( 1) 00:10:15.416 7240.440 - 7269.059: 0.4359% ( 1) 00:10:15.416 7269.059 - 7297.677: 0.4761% ( 6) 00:10:15.416 7297.677 - 7326.295: 0.5700% ( 14) 00:10:15.416 7326.295 - 7383.532: 0.8986% ( 49) 00:10:15.416 7383.532 - 7440.769: 1.3948% ( 74) 00:10:15.416 7440.769 - 7498.005: 2.0923% ( 104) 00:10:15.416 7498.005 - 7555.242: 3.2859% ( 178) 00:10:15.416 7555.242 - 7612.479: 4.9624% ( 250) 00:10:15.416 7612.479 - 7669.715: 7.3297% ( 353) 00:10:15.416 7669.715 - 7726.952: 10.5821% ( 485) 00:10:15.416 7726.952 - 7784.189: 14.6996% ( 614) 00:10:15.416 7784.189 - 7841.425: 19.3737% ( 697) 00:10:15.416 7841.425 - 7898.662: 24.1752% ( 716) 00:10:15.416 7898.662 - 7955.899: 29.2650% ( 759) 00:10:15.416 7955.899 - 8013.135: 34.1738% ( 732) 00:10:15.416 8013.135 - 8070.372: 39.3374% ( 770) 00:10:15.416 8070.372 - 8127.609: 44.4810% ( 767) 00:10:15.416 8127.609 - 8184.845: 49.6446% ( 770) 00:10:15.416 8184.845 - 8242.082: 54.9021% ( 784) 00:10:15.416 8242.082 - 8299.319: 60.0054% ( 761) 00:10:15.416 8299.319 - 8356.555: 64.8136% ( 717) 00:10:15.416 8356.555 - 8413.792: 69.4877% ( 697) 00:10:15.416 8413.792 - 8471.029: 73.7057% ( 629) 00:10:15.416 8471.029 - 8528.266: 77.3940% ( 550) 00:10:15.416 8528.266 - 8585.502: 80.3313% ( 438) 00:10:15.416 8585.502 - 8642.739: 82.4973% ( 323) 00:10:15.416 8642.739 - 8699.976: 83.9525% ( 217) 00:10:15.416 8699.976 - 8757.212: 85.0657% ( 166) 00:10:15.416 8757.212 - 8814.449: 85.8436% ( 116) 00:10:15.416 8814.449 - 8871.686: 86.5545% ( 106) 00:10:15.416 8871.686 - 8928.922: 87.1781% ( 93) 00:10:15.416 8928.922 - 8986.159: 87.7213% ( 81) 00:10:15.416 8986.159 - 9043.396: 88.2242% ( 75) 00:10:15.416 9043.396 - 9100.632: 88.6601% ( 65) 00:10:15.416 9100.632 - 9157.869: 88.9753% ( 47) 00:10:15.416 9157.869 - 9215.106: 89.2637% ( 43) 00:10:15.416 9215.106 - 9272.342: 89.6862% ( 63) 00:10:15.416 9272.342 - 9329.579: 90.1355% ( 67) 00:10:15.416 9329.579 - 9386.816: 90.6451% ( 76) 00:10:15.416 9386.816 - 9444.052: 91.0877% ( 66) 00:10:15.416 9444.052 - 9501.289: 91.4834% ( 59) 00:10:15.416 9501.289 - 9558.526: 91.8991% ( 62) 00:10:15.416 9558.526 - 9615.762: 92.3350% ( 65) 00:10:15.416 9615.762 - 9672.999: 92.7642% ( 64) 00:10:15.416 9672.999 - 9730.236: 93.2068% ( 66) 00:10:15.416 9730.236 - 9787.472: 93.6494% ( 66) 00:10:15.416 9787.472 - 9844.709: 94.0585% ( 61) 00:10:15.417 9844.709 - 9901.946: 94.4675% ( 61) 00:10:15.417 9901.946 - 9959.183: 94.8632% ( 59) 00:10:15.417 9959.183 - 10016.419: 95.2924% ( 64) 00:10:15.417 10016.419 - 10073.656: 95.6880% ( 59) 00:10:15.417 10073.656 - 10130.893: 96.0703% ( 57) 00:10:15.417 10130.893 - 10188.129: 96.3720% ( 45) 00:10:15.417 10188.129 - 10245.366: 96.5397% ( 25) 00:10:15.417 10245.366 - 10302.603: 96.7006% ( 24) 00:10:15.417 10302.603 - 10359.839: 96.8415% ( 21) 00:10:15.417 10359.839 - 10417.076: 96.9555% ( 17) 00:10:15.417 10417.076 - 10474.313: 97.0762% ( 18) 00:10:15.417 10474.313 - 10531.549: 97.1567% ( 12) 00:10:15.417 10531.549 - 10588.786: 97.2438% ( 13) 00:10:15.417 10588.786 - 10646.023: 97.3176% ( 11) 00:10:15.417 10646.023 - 10703.259: 97.3914% ( 11) 00:10:15.417 10703.259 - 10760.496: 97.4450% ( 8) 00:10:15.417 10760.496 - 
10817.733: 97.4920% ( 7) 00:10:15.417 10817.733 - 10874.969: 97.5255% ( 5) 00:10:15.417 10874.969 - 10932.206: 97.5389% ( 2) 00:10:15.417 10932.206 - 10989.443: 97.5523% ( 2) 00:10:15.417 10989.443 - 11046.679: 97.5657% ( 2) 00:10:15.417 11046.679 - 11103.916: 97.5858% ( 3) 00:10:15.417 11103.916 - 11161.153: 97.5992% ( 2) 00:10:15.417 11161.153 - 11218.390: 97.6127% ( 2) 00:10:15.417 11218.390 - 11275.626: 97.6261% ( 2) 00:10:15.417 11275.626 - 11332.863: 97.6462% ( 3) 00:10:15.417 11332.863 - 11390.100: 97.6596% ( 2) 00:10:15.417 11390.100 - 11447.336: 97.6730% ( 2) 00:10:15.417 11447.336 - 11504.573: 97.6931% ( 3) 00:10:15.417 11504.573 - 11561.810: 97.7065% ( 2) 00:10:15.417 11561.810 - 11619.046: 97.7267% ( 3) 00:10:15.417 11619.046 - 11676.283: 97.7401% ( 2) 00:10:15.417 11676.283 - 11733.520: 97.7535% ( 2) 00:10:15.417 11733.520 - 11790.756: 97.7736% ( 3) 00:10:15.417 11790.756 - 11847.993: 97.7870% ( 2) 00:10:15.417 11847.993 - 11905.230: 97.8004% ( 2) 00:10:15.417 11905.230 - 11962.466: 97.8205% ( 3) 00:10:15.417 11962.466 - 12019.703: 97.8340% ( 2) 00:10:15.417 12019.703 - 12076.940: 97.8474% ( 2) 00:10:15.417 12076.940 - 12134.176: 97.8541% ( 1) 00:10:15.417 12763.780 - 12821.017: 97.8608% ( 1) 00:10:15.417 12821.017 - 12878.253: 97.8742% ( 2) 00:10:15.417 12878.253 - 12935.490: 97.8876% ( 2) 00:10:15.417 12935.490 - 12992.727: 97.9077% ( 3) 00:10:15.417 12992.727 - 13049.963: 97.9144% ( 1) 00:10:15.417 13049.963 - 13107.200: 97.9278% ( 2) 00:10:15.417 13107.200 - 13164.437: 97.9345% ( 1) 00:10:15.417 13164.437 - 13221.673: 97.9480% ( 2) 00:10:15.417 13221.673 - 13278.910: 97.9614% ( 2) 00:10:15.417 13278.910 - 13336.147: 97.9748% ( 2) 00:10:15.417 13336.147 - 13393.383: 97.9882% ( 2) 00:10:15.417 13393.383 - 13450.620: 98.0016% ( 2) 00:10:15.417 13450.620 - 13507.857: 98.0150% ( 2) 00:10:15.417 13507.857 - 13565.093: 98.0284% ( 2) 00:10:15.417 13565.093 - 13622.330: 98.0418% ( 2) 00:10:15.417 13622.330 - 13679.567: 98.0486% ( 1) 00:10:15.417 13679.567 - 13736.803: 98.0620% ( 2) 00:10:15.417 13736.803 - 13794.040: 98.0754% ( 2) 00:10:15.417 13794.040 - 13851.277: 98.0821% ( 1) 00:10:15.417 13851.277 - 13908.514: 98.0955% ( 2) 00:10:15.417 13908.514 - 13965.750: 98.1290% ( 5) 00:10:15.417 13965.750 - 14022.987: 98.1558% ( 4) 00:10:15.417 14022.987 - 14080.224: 98.1760% ( 3) 00:10:15.417 14080.224 - 14137.460: 98.1961% ( 3) 00:10:15.417 14137.460 - 14194.697: 98.2229% ( 4) 00:10:15.417 14194.697 - 14251.934: 98.2430% ( 3) 00:10:15.417 14251.934 - 14309.170: 98.2698% ( 4) 00:10:15.417 14309.170 - 14366.407: 98.2900% ( 3) 00:10:15.417 14366.407 - 14423.644: 98.3168% ( 4) 00:10:15.417 14423.644 - 14480.880: 98.3369% ( 3) 00:10:15.417 14480.880 - 14538.117: 98.3570% ( 3) 00:10:15.417 14538.117 - 14595.354: 98.3839% ( 4) 00:10:15.417 14595.354 - 14652.590: 98.4040% ( 3) 00:10:15.417 14652.590 - 14767.064: 98.4442% ( 6) 00:10:15.417 14767.064 - 14881.537: 98.4710% ( 4) 00:10:15.417 14881.537 - 14996.010: 98.4844% ( 2) 00:10:15.417 14996.010 - 15110.484: 98.4979% ( 2) 00:10:15.417 15110.484 - 15224.957: 98.5247% ( 4) 00:10:15.417 15224.957 - 15339.431: 98.5448% ( 3) 00:10:15.417 15339.431 - 15453.904: 98.5649% ( 3) 00:10:15.417 15453.904 - 15568.377: 98.5850% ( 3) 00:10:15.417 15568.377 - 15682.851: 98.6052% ( 3) 00:10:15.417 15682.851 - 15797.324: 98.6253% ( 3) 00:10:15.417 15797.324 - 15911.797: 98.6454% ( 3) 00:10:15.417 15911.797 - 16026.271: 98.6655% ( 3) 00:10:15.417 16026.271 - 16140.744: 98.6856% ( 3) 00:10:15.417 16140.744 - 16255.217: 98.7057% ( 3) 00:10:15.417 16255.217 - 
16369.691: 98.7124% ( 1) 00:10:15.417 17972.318 - 18086.791: 98.7527% ( 6) 00:10:15.417 18086.791 - 18201.265: 98.7862% ( 5) 00:10:15.417 18201.265 - 18315.738: 98.8197% ( 5) 00:10:15.417 18315.738 - 18430.211: 98.8600% ( 6) 00:10:15.417 18430.211 - 18544.685: 98.8935% ( 5) 00:10:15.417 18544.685 - 18659.158: 98.9270% ( 5) 00:10:15.417 18659.158 - 18773.631: 98.9673% ( 6) 00:10:15.417 18773.631 - 18888.105: 99.0008% ( 5) 00:10:15.417 18888.105 - 19002.578: 99.0343% ( 5) 00:10:15.417 19002.578 - 19117.052: 99.0679% ( 5) 00:10:15.417 19117.052 - 19231.525: 99.1014% ( 5) 00:10:15.417 19231.525 - 19345.998: 99.1349% ( 5) 00:10:15.417 19345.998 - 19460.472: 99.1416% ( 1) 00:10:15.417 29305.181 - 29534.128: 99.1483% ( 1) 00:10:15.417 29534.128 - 29763.074: 99.2422% ( 14) 00:10:15.417 29763.074 - 29992.021: 99.3361% ( 14) 00:10:15.417 29992.021 - 30220.968: 99.4367% ( 15) 00:10:15.417 30220.968 - 30449.914: 99.5373% ( 15) 00:10:15.417 30449.914 - 30678.861: 99.5708% ( 5) 00:10:15.417 35028.849 - 35257.796: 99.6245% ( 8) 00:10:15.417 35257.796 - 35486.742: 99.7183% ( 14) 00:10:15.417 35486.742 - 35715.689: 99.8122% ( 14) 00:10:15.417 35715.689 - 35944.636: 99.9061% ( 14) 00:10:15.417 35944.636 - 36173.583: 100.0000% ( 14) 00:10:15.417 00:10:15.417 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:15.417 ============================================================================== 00:10:15.417 Range in us Cumulative IO count 00:10:15.417 5265.775 - 5294.393: 0.0134% ( 2) 00:10:15.417 5294.393 - 5323.011: 0.0536% ( 6) 00:10:15.417 5323.011 - 5351.630: 0.0604% ( 1) 00:10:15.417 5351.630 - 5380.248: 0.0671% ( 1) 00:10:15.417 5380.248 - 5408.866: 0.0872% ( 3) 00:10:15.417 5408.866 - 5437.485: 0.1006% ( 2) 00:10:15.417 5437.485 - 5466.103: 0.1073% ( 1) 00:10:15.417 5466.103 - 5494.721: 0.1274% ( 3) 00:10:15.417 5494.721 - 5523.340: 0.1408% ( 2) 00:10:15.417 5523.340 - 5551.958: 0.1542% ( 2) 00:10:15.417 5551.958 - 5580.576: 0.1609% ( 1) 00:10:15.417 5580.576 - 5609.195: 0.1811% ( 3) 00:10:15.417 5609.195 - 5637.813: 0.1945% ( 2) 00:10:15.417 5637.813 - 5666.431: 0.2079% ( 2) 00:10:15.417 5666.431 - 5695.050: 0.2213% ( 2) 00:10:15.417 5695.050 - 5723.668: 0.2347% ( 2) 00:10:15.417 5723.668 - 5752.286: 0.2481% ( 2) 00:10:15.417 5752.286 - 5780.905: 0.2615% ( 2) 00:10:15.417 5780.905 - 5809.523: 0.2817% ( 3) 00:10:15.417 5809.523 - 5838.141: 0.2951% ( 2) 00:10:15.417 5838.141 - 5866.760: 0.3085% ( 2) 00:10:15.417 5866.760 - 5895.378: 0.3219% ( 2) 00:10:15.417 5895.378 - 5923.997: 0.3353% ( 2) 00:10:15.417 5923.997 - 5952.615: 0.3487% ( 2) 00:10:15.417 5952.615 - 5981.233: 0.3621% ( 2) 00:10:15.417 5981.233 - 6009.852: 0.3755% ( 2) 00:10:15.417 6009.852 - 6038.470: 0.3889% ( 2) 00:10:15.417 6038.470 - 6067.088: 0.4091% ( 3) 00:10:15.417 6067.088 - 6095.707: 0.4225% ( 2) 00:10:15.417 6095.707 - 6124.325: 0.4292% ( 1) 00:10:15.417 7240.440 - 7269.059: 0.4560% ( 4) 00:10:15.417 7269.059 - 7297.677: 0.4962% ( 6) 00:10:15.417 7297.677 - 7326.295: 0.5499% ( 8) 00:10:15.417 7326.295 - 7383.532: 0.8517% ( 45) 00:10:15.417 7383.532 - 7440.769: 1.3278% ( 71) 00:10:15.417 7440.769 - 7498.005: 2.1124% ( 117) 00:10:15.417 7498.005 - 7555.242: 3.3195% ( 180) 00:10:15.417 7555.242 - 7612.479: 4.9624% ( 245) 00:10:15.417 7612.479 - 7669.715: 7.2894% ( 347) 00:10:15.417 7669.715 - 7726.952: 10.7564% ( 517) 00:10:15.417 7726.952 - 7784.189: 14.8739% ( 614) 00:10:15.417 7784.189 - 7841.425: 19.3871% ( 673) 00:10:15.417 7841.425 - 7898.662: 24.2020% ( 718) 00:10:15.417 7898.662 - 7955.899: 29.1913% ( 744) 
00:10:15.417 7955.899 - 8013.135: 34.1470% ( 739) 00:10:15.417 8013.135 - 8070.372: 39.1832% ( 751) 00:10:15.417 8070.372 - 8127.609: 44.2127% ( 750) 00:10:15.417 8127.609 - 8184.845: 49.5909% ( 802) 00:10:15.417 8184.845 - 8242.082: 54.8149% ( 779) 00:10:15.417 8242.082 - 8299.319: 59.9852% ( 771) 00:10:15.417 8299.319 - 8356.555: 64.8873% ( 731) 00:10:15.417 8356.555 - 8413.792: 69.4877% ( 686) 00:10:15.417 8413.792 - 8471.029: 73.8130% ( 645) 00:10:15.417 8471.029 - 8528.266: 77.4812% ( 547) 00:10:15.417 8528.266 - 8585.502: 80.4922% ( 449) 00:10:15.417 8585.502 - 8642.739: 82.6247% ( 318) 00:10:15.417 8642.739 - 8699.976: 84.0397% ( 211) 00:10:15.417 8699.976 - 8757.212: 85.1328% ( 163) 00:10:15.417 8757.212 - 8814.449: 85.9643% ( 124) 00:10:15.417 8814.449 - 8871.686: 86.6483% ( 102) 00:10:15.417 8871.686 - 8928.922: 87.2251% ( 86) 00:10:15.417 8928.922 - 8986.159: 87.7615% ( 80) 00:10:15.417 8986.159 - 9043.396: 88.2712% ( 76) 00:10:15.417 9043.396 - 9100.632: 88.7540% ( 72) 00:10:15.417 9100.632 - 9157.869: 89.1430% ( 58) 00:10:15.417 9157.869 - 9215.106: 89.4716% ( 49) 00:10:15.417 9215.106 - 9272.342: 89.9142% ( 66) 00:10:15.417 9272.342 - 9329.579: 90.3299% ( 62) 00:10:15.417 9329.579 - 9386.816: 90.8128% ( 72) 00:10:15.417 9386.816 - 9444.052: 91.3023% ( 73) 00:10:15.417 9444.052 - 9501.289: 91.7717% ( 70) 00:10:15.417 9501.289 - 9558.526: 92.1808% ( 61) 00:10:15.418 9558.526 - 9615.762: 92.6301% ( 67) 00:10:15.418 9615.762 - 9672.999: 93.0392% ( 61) 00:10:15.418 9672.999 - 9730.236: 93.4482% ( 61) 00:10:15.418 9730.236 - 9787.472: 93.8774% ( 64) 00:10:15.418 9787.472 - 9844.709: 94.2798% ( 60) 00:10:15.418 9844.709 - 9901.946: 94.6620% ( 57) 00:10:15.418 9901.946 - 9959.183: 95.0241% ( 54) 00:10:15.418 9959.183 - 10016.419: 95.3863% ( 54) 00:10:15.418 10016.419 - 10073.656: 95.7618% ( 56) 00:10:15.418 10073.656 - 10130.893: 96.0636% ( 45) 00:10:15.418 10130.893 - 10188.129: 96.3318% ( 40) 00:10:15.418 10188.129 - 10245.366: 96.5330% ( 30) 00:10:15.418 10245.366 - 10302.603: 96.6939% ( 24) 00:10:15.418 10302.603 - 10359.839: 96.8281% ( 20) 00:10:15.418 10359.839 - 10417.076: 96.9354% ( 16) 00:10:15.418 10417.076 - 10474.313: 97.0292% ( 14) 00:10:15.418 10474.313 - 10531.549: 97.1097% ( 12) 00:10:15.418 10531.549 - 10588.786: 97.1567% ( 7) 00:10:15.418 10588.786 - 10646.023: 97.2304% ( 11) 00:10:15.418 10646.023 - 10703.259: 97.2975% ( 10) 00:10:15.418 10703.259 - 10760.496: 97.3645% ( 10) 00:10:15.418 10760.496 - 10817.733: 97.4316% ( 10) 00:10:15.418 10817.733 - 10874.969: 97.4987% ( 10) 00:10:15.418 10874.969 - 10932.206: 97.5590% ( 9) 00:10:15.418 10932.206 - 10989.443: 97.5925% ( 5) 00:10:15.418 10989.443 - 11046.679: 97.6127% ( 3) 00:10:15.418 11046.679 - 11103.916: 97.6261% ( 2) 00:10:15.418 11103.916 - 11161.153: 97.6462% ( 3) 00:10:15.418 11161.153 - 11218.390: 97.6596% ( 2) 00:10:15.418 11218.390 - 11275.626: 97.6730% ( 2) 00:10:15.418 11275.626 - 11332.863: 97.6864% ( 2) 00:10:15.418 11332.863 - 11390.100: 97.6998% ( 2) 00:10:15.418 11390.100 - 11447.336: 97.7200% ( 3) 00:10:15.418 11447.336 - 11504.573: 97.7334% ( 2) 00:10:15.418 11504.573 - 11561.810: 97.7468% ( 2) 00:10:15.418 11561.810 - 11619.046: 97.7669% ( 3) 00:10:15.418 11619.046 - 11676.283: 97.7803% ( 2) 00:10:15.418 11676.283 - 11733.520: 97.7937% ( 2) 00:10:15.418 11733.520 - 11790.756: 97.8138% ( 3) 00:10:15.418 11790.756 - 11847.993: 97.8273% ( 2) 00:10:15.418 11847.993 - 11905.230: 97.8407% ( 2) 00:10:15.418 11905.230 - 11962.466: 97.8541% ( 2) 00:10:15.418 13336.147 - 13393.383: 97.8608% ( 1) 
00:10:15.418 13393.383 - 13450.620: 97.8742% ( 2) 00:10:15.418 13450.620 - 13507.857: 97.9144% ( 6) 00:10:15.418 13507.857 - 13565.093: 97.9278% ( 2) 00:10:15.418 13565.093 - 13622.330: 97.9480% ( 3) 00:10:15.418 13622.330 - 13679.567: 97.9681% ( 3) 00:10:15.418 13679.567 - 13736.803: 97.9882% ( 3) 00:10:15.418 13736.803 - 13794.040: 98.0150% ( 4) 00:10:15.418 13794.040 - 13851.277: 98.0418% ( 4) 00:10:15.418 13851.277 - 13908.514: 98.0620% ( 3) 00:10:15.418 13908.514 - 13965.750: 98.0821% ( 3) 00:10:15.418 13965.750 - 14022.987: 98.1089% ( 4) 00:10:15.418 14022.987 - 14080.224: 98.1290% ( 3) 00:10:15.418 14080.224 - 14137.460: 98.1558% ( 4) 00:10:15.418 14137.460 - 14194.697: 98.1827% ( 4) 00:10:15.418 14194.697 - 14251.934: 98.2028% ( 3) 00:10:15.418 14251.934 - 14309.170: 98.2162% ( 2) 00:10:15.418 14309.170 - 14366.407: 98.2430% ( 4) 00:10:15.418 14366.407 - 14423.644: 98.2631% ( 3) 00:10:15.418 14423.644 - 14480.880: 98.2900% ( 4) 00:10:15.418 14480.880 - 14538.117: 98.3168% ( 4) 00:10:15.418 14538.117 - 14595.354: 98.3369% ( 3) 00:10:15.418 14595.354 - 14652.590: 98.3637% ( 4) 00:10:15.418 14652.590 - 14767.064: 98.4040% ( 6) 00:10:15.418 14767.064 - 14881.537: 98.4442% ( 6) 00:10:15.418 14881.537 - 14996.010: 98.4911% ( 7) 00:10:15.418 14996.010 - 15110.484: 98.5448% ( 8) 00:10:15.418 15110.484 - 15224.957: 98.5917% ( 7) 00:10:15.418 15224.957 - 15339.431: 98.6253% ( 5) 00:10:15.418 15339.431 - 15453.904: 98.6454% ( 3) 00:10:15.418 15453.904 - 15568.377: 98.6655% ( 3) 00:10:15.418 15568.377 - 15682.851: 98.6856% ( 3) 00:10:15.418 15682.851 - 15797.324: 98.7057% ( 3) 00:10:15.418 15797.324 - 15911.797: 98.7124% ( 1) 00:10:15.418 16942.058 - 17056.531: 98.7393% ( 4) 00:10:15.418 17056.531 - 17171.004: 98.7527% ( 2) 00:10:15.418 17171.004 - 17285.478: 98.7728% ( 3) 00:10:15.418 17285.478 - 17399.951: 98.7929% ( 3) 00:10:15.418 17399.951 - 17514.424: 98.8130% ( 3) 00:10:15.418 17514.424 - 17628.898: 98.8399% ( 4) 00:10:15.418 17628.898 - 17743.371: 98.8600% ( 3) 00:10:15.418 17743.371 - 17857.845: 98.8801% ( 3) 00:10:15.418 17857.845 - 17972.318: 98.9002% ( 3) 00:10:15.418 17972.318 - 18086.791: 98.9136% ( 2) 00:10:15.418 18086.791 - 18201.265: 98.9337% ( 3) 00:10:15.418 18201.265 - 18315.738: 98.9539% ( 3) 00:10:15.418 18315.738 - 18430.211: 98.9807% ( 4) 00:10:15.418 18430.211 - 18544.685: 99.0008% ( 3) 00:10:15.418 18544.685 - 18659.158: 99.0209% ( 3) 00:10:15.418 18659.158 - 18773.631: 99.0410% ( 3) 00:10:15.418 18773.631 - 18888.105: 99.0612% ( 3) 00:10:15.418 18888.105 - 19002.578: 99.0813% ( 3) 00:10:15.418 19002.578 - 19117.052: 99.1014% ( 3) 00:10:15.418 19117.052 - 19231.525: 99.1215% ( 3) 00:10:15.418 19231.525 - 19345.998: 99.1416% ( 3) 00:10:15.418 29763.074 - 29992.021: 99.2154% ( 11) 00:10:15.418 29992.021 - 30220.968: 99.3093% ( 14) 00:10:15.418 30220.968 - 30449.914: 99.3965% ( 13) 00:10:15.418 30449.914 - 30678.861: 99.4970% ( 15) 00:10:15.418 30678.861 - 30907.808: 99.5708% ( 11) 00:10:15.418 35028.849 - 35257.796: 99.6043% ( 5) 00:10:15.418 35257.796 - 35486.742: 99.6982% ( 14) 00:10:15.418 35486.742 - 35715.689: 99.7921% ( 14) 00:10:15.418 35715.689 - 35944.636: 99.8860% ( 14) 00:10:15.418 35944.636 - 36173.583: 99.9732% ( 13) 00:10:15.418 36173.583 - 36402.529: 100.0000% ( 4) 00:10:15.418 00:10:15.418 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:15.418 ============================================================================== 00:10:15.418 Range in us Cumulative IO count 00:10:15.418 4922.355 - 4950.973: 0.0335% ( 5) 00:10:15.418 4950.973 
- 4979.591: 0.0536% ( 3) 00:10:15.418 4979.591 - 5008.210: 0.0738% ( 3) 00:10:15.418 5008.210 - 5036.828: 0.0872% ( 2) 00:10:15.418 5036.828 - 5065.446: 0.0939% ( 1) 00:10:15.418 5065.446 - 5094.065: 0.1140% ( 3) 00:10:15.418 5094.065 - 5122.683: 0.1207% ( 1) 00:10:15.418 5122.683 - 5151.301: 0.1341% ( 2) 00:10:15.418 5151.301 - 5179.920: 0.1542% ( 3) 00:10:15.418 5179.920 - 5208.538: 0.1677% ( 2) 00:10:15.418 5208.538 - 5237.156: 0.1811% ( 2) 00:10:15.418 5237.156 - 5265.775: 0.1945% ( 2) 00:10:15.418 5265.775 - 5294.393: 0.2146% ( 3) 00:10:15.418 5294.393 - 5323.011: 0.2280% ( 2) 00:10:15.418 5323.011 - 5351.630: 0.2414% ( 2) 00:10:15.418 5351.630 - 5380.248: 0.2615% ( 3) 00:10:15.418 5380.248 - 5408.866: 0.2749% ( 2) 00:10:15.418 5408.866 - 5437.485: 0.2884% ( 2) 00:10:15.418 5437.485 - 5466.103: 0.3018% ( 2) 00:10:15.418 5466.103 - 5494.721: 0.3152% ( 2) 00:10:15.418 5494.721 - 5523.340: 0.3286% ( 2) 00:10:15.418 5523.340 - 5551.958: 0.3420% ( 2) 00:10:15.418 5551.958 - 5580.576: 0.3554% ( 2) 00:10:15.418 5580.576 - 5609.195: 0.3688% ( 2) 00:10:15.418 5609.195 - 5637.813: 0.3822% ( 2) 00:10:15.418 5637.813 - 5666.431: 0.3889% ( 1) 00:10:15.418 5666.431 - 5695.050: 0.4024% ( 2) 00:10:15.418 5695.050 - 5723.668: 0.4158% ( 2) 00:10:15.418 5723.668 - 5752.286: 0.4292% ( 2) 00:10:15.418 7240.440 - 7269.059: 0.4493% ( 3) 00:10:15.418 7269.059 - 7297.677: 0.4694% ( 3) 00:10:15.418 7297.677 - 7326.295: 0.5164% ( 7) 00:10:15.418 7326.295 - 7383.532: 0.7511% ( 35) 00:10:15.418 7383.532 - 7440.769: 1.2808% ( 79) 00:10:15.418 7440.769 - 7498.005: 2.0386% ( 113) 00:10:15.418 7498.005 - 7555.242: 3.2323% ( 178) 00:10:15.418 7555.242 - 7612.479: 4.8283% ( 238) 00:10:15.418 7612.479 - 7669.715: 7.1754% ( 350) 00:10:15.418 7669.715 - 7726.952: 10.3675% ( 476) 00:10:15.418 7726.952 - 7784.189: 14.6660% ( 641) 00:10:15.418 7784.189 - 7841.425: 19.6017% ( 736) 00:10:15.418 7841.425 - 7898.662: 24.3696% ( 711) 00:10:15.418 7898.662 - 7955.899: 29.1309% ( 710) 00:10:15.418 7955.899 - 8013.135: 33.9995% ( 726) 00:10:15.418 8013.135 - 8070.372: 39.1027% ( 761) 00:10:15.418 8070.372 - 8127.609: 44.3535% ( 783) 00:10:15.418 8127.609 - 8184.845: 49.6043% ( 783) 00:10:15.418 8184.845 - 8242.082: 54.8149% ( 777) 00:10:15.418 8242.082 - 8299.319: 59.8780% ( 755) 00:10:15.418 8299.319 - 8356.555: 64.8270% ( 738) 00:10:15.418 8356.555 - 8413.792: 69.4474% ( 689) 00:10:15.418 8413.792 - 8471.029: 73.6856% ( 632) 00:10:15.418 8471.029 - 8528.266: 77.4611% ( 563) 00:10:15.418 8528.266 - 8585.502: 80.3715% ( 434) 00:10:15.418 8585.502 - 8642.739: 82.5040% ( 318) 00:10:15.418 8642.739 - 8699.976: 83.9995% ( 223) 00:10:15.418 8699.976 - 8757.212: 85.1395% ( 170) 00:10:15.418 8757.212 - 8814.449: 86.0247% ( 132) 00:10:15.418 8814.449 - 8871.686: 86.7422% ( 107) 00:10:15.418 8871.686 - 8928.922: 87.3189% ( 86) 00:10:15.418 8928.922 - 8986.159: 87.8554% ( 80) 00:10:15.418 8986.159 - 9043.396: 88.3919% ( 80) 00:10:15.418 9043.396 - 9100.632: 88.8345% ( 66) 00:10:15.418 9100.632 - 9157.869: 89.2100% ( 56) 00:10:15.418 9157.869 - 9215.106: 89.5587% ( 52) 00:10:15.418 9215.106 - 9272.342: 89.8873% ( 49) 00:10:15.418 9272.342 - 9329.579: 90.3501% ( 69) 00:10:15.418 9329.579 - 9386.816: 90.7859% ( 65) 00:10:15.418 9386.816 - 9444.052: 91.1816% ( 59) 00:10:15.418 9444.052 - 9501.289: 91.6242% ( 66) 00:10:15.418 9501.289 - 9558.526: 92.0869% ( 69) 00:10:15.418 9558.526 - 9615.762: 92.5496% ( 69) 00:10:15.418 9615.762 - 9672.999: 92.9922% ( 66) 00:10:15.418 9672.999 - 9730.236: 93.4080% ( 62) 00:10:15.419 9730.236 - 9787.472: 93.8305% 
( 63) 00:10:15.419 9787.472 - 9844.709: 94.2060% ( 56) 00:10:15.419 9844.709 - 9901.946: 94.5815% ( 56) 00:10:15.419 9901.946 - 9959.183: 94.9571% ( 56) 00:10:15.419 9959.183 - 10016.419: 95.3326% ( 56) 00:10:15.419 10016.419 - 10073.656: 95.7149% ( 57) 00:10:15.419 10073.656 - 10130.893: 96.0435% ( 49) 00:10:15.419 10130.893 - 10188.129: 96.3452% ( 45) 00:10:15.419 10188.129 - 10245.366: 96.5531% ( 31) 00:10:15.419 10245.366 - 10302.603: 96.7409% ( 28) 00:10:15.419 10302.603 - 10359.839: 96.9085% ( 25) 00:10:15.419 10359.839 - 10417.076: 97.0158% ( 16) 00:10:15.419 10417.076 - 10474.313: 97.1030% ( 13) 00:10:15.419 10474.313 - 10531.549: 97.1768% ( 11) 00:10:15.419 10531.549 - 10588.786: 97.2438% ( 10) 00:10:15.419 10588.786 - 10646.023: 97.3109% ( 10) 00:10:15.419 10646.023 - 10703.259: 97.3780% ( 10) 00:10:15.419 10703.259 - 10760.496: 97.4517% ( 11) 00:10:15.419 10760.496 - 10817.733: 97.5188% ( 10) 00:10:15.419 10817.733 - 10874.969: 97.5791% ( 9) 00:10:15.419 10874.969 - 10932.206: 97.6261% ( 7) 00:10:15.419 10932.206 - 10989.443: 97.6663% ( 6) 00:10:15.419 10989.443 - 11046.679: 97.6864% ( 3) 00:10:15.419 11046.679 - 11103.916: 97.6998% ( 2) 00:10:15.419 11103.916 - 11161.153: 97.7133% ( 2) 00:10:15.419 11161.153 - 11218.390: 97.7267% ( 2) 00:10:15.419 11218.390 - 11275.626: 97.7468% ( 3) 00:10:15.419 11275.626 - 11332.863: 97.7602% ( 2) 00:10:15.419 11332.863 - 11390.100: 97.7803% ( 3) 00:10:15.419 11390.100 - 11447.336: 97.7937% ( 2) 00:10:15.419 11447.336 - 11504.573: 97.8071% ( 2) 00:10:15.419 11504.573 - 11561.810: 97.8205% ( 2) 00:10:15.419 11561.810 - 11619.046: 97.8340% ( 2) 00:10:15.419 11619.046 - 11676.283: 97.8541% ( 3) 00:10:15.419 12878.253 - 12935.490: 97.8742% ( 3) 00:10:15.419 12992.727 - 13049.963: 97.8876% ( 2) 00:10:15.419 13049.963 - 13107.200: 97.9010% ( 2) 00:10:15.419 13107.200 - 13164.437: 97.9077% ( 1) 00:10:15.419 13164.437 - 13221.673: 97.9211% ( 2) 00:10:15.419 13221.673 - 13278.910: 97.9278% ( 1) 00:10:15.419 13278.910 - 13336.147: 97.9413% ( 2) 00:10:15.419 13336.147 - 13393.383: 97.9480% ( 1) 00:10:15.419 13393.383 - 13450.620: 97.9614% ( 2) 00:10:15.419 13450.620 - 13507.857: 97.9748% ( 2) 00:10:15.419 13507.857 - 13565.093: 97.9815% ( 1) 00:10:15.419 13565.093 - 13622.330: 97.9949% ( 2) 00:10:15.419 13622.330 - 13679.567: 98.0016% ( 1) 00:10:15.419 13679.567 - 13736.803: 98.0150% ( 2) 00:10:15.419 13736.803 - 13794.040: 98.0217% ( 1) 00:10:15.419 13794.040 - 13851.277: 98.0351% ( 2) 00:10:15.419 13851.277 - 13908.514: 98.0418% ( 1) 00:10:15.419 13908.514 - 13965.750: 98.0553% ( 2) 00:10:15.419 13965.750 - 14022.987: 98.0620% ( 1) 00:10:15.419 14022.987 - 14080.224: 98.0754% ( 2) 00:10:15.419 14080.224 - 14137.460: 98.0888% ( 2) 00:10:15.419 14137.460 - 14194.697: 98.0955% ( 1) 00:10:15.419 14194.697 - 14251.934: 98.1022% ( 1) 00:10:15.419 14251.934 - 14309.170: 98.1156% ( 2) 00:10:15.419 14309.170 - 14366.407: 98.1290% ( 2) 00:10:15.419 14366.407 - 14423.644: 98.1558% ( 4) 00:10:15.419 14423.644 - 14480.880: 98.1760% ( 3) 00:10:15.419 14480.880 - 14538.117: 98.1961% ( 3) 00:10:15.419 14538.117 - 14595.354: 98.2229% ( 4) 00:10:15.419 14595.354 - 14652.590: 98.2430% ( 3) 00:10:15.419 14652.590 - 14767.064: 98.2900% ( 7) 00:10:15.419 14767.064 - 14881.537: 98.3369% ( 7) 00:10:15.419 14881.537 - 14996.010: 98.3839% ( 7) 00:10:15.419 14996.010 - 15110.484: 98.4308% ( 7) 00:10:15.419 15110.484 - 15224.957: 98.4710% ( 6) 00:10:15.419 15224.957 - 15339.431: 98.5046% ( 5) 00:10:15.419 15339.431 - 15453.904: 98.5314% ( 4) 00:10:15.419 15453.904 - 15568.377: 
98.5582% ( 4) 00:10:15.419 15568.377 - 15682.851: 98.5850% ( 4) 00:10:15.419 15682.851 - 15797.324: 98.6119% ( 4) 00:10:15.419 15797.324 - 15911.797: 98.6320% ( 3) 00:10:15.419 15911.797 - 16026.271: 98.6588% ( 4) 00:10:15.419 16026.271 - 16140.744: 98.6856% ( 4) 00:10:15.419 16140.744 - 16255.217: 98.7124% ( 4) 00:10:15.419 16255.217 - 16369.691: 98.7192% ( 1) 00:10:15.419 16369.691 - 16484.164: 98.7460% ( 4) 00:10:15.419 16484.164 - 16598.638: 98.7661% ( 3) 00:10:15.419 16598.638 - 16713.111: 98.7862% ( 3) 00:10:15.419 16713.111 - 16827.584: 98.8063% ( 3) 00:10:15.419 16827.584 - 16942.058: 98.8264% ( 3) 00:10:15.419 16942.058 - 17056.531: 98.8466% ( 3) 00:10:15.419 17056.531 - 17171.004: 98.8667% ( 3) 00:10:15.419 17171.004 - 17285.478: 98.8868% ( 3) 00:10:15.419 17285.478 - 17399.951: 98.9069% ( 3) 00:10:15.419 17399.951 - 17514.424: 98.9270% ( 3) 00:10:15.419 17514.424 - 17628.898: 98.9405% ( 2) 00:10:15.419 17628.898 - 17743.371: 98.9606% ( 3) 00:10:15.419 17743.371 - 17857.845: 98.9874% ( 4) 00:10:15.419 17857.845 - 17972.318: 99.0075% ( 3) 00:10:15.419 17972.318 - 18086.791: 99.0276% ( 3) 00:10:15.419 18086.791 - 18201.265: 99.0477% ( 3) 00:10:15.419 18201.265 - 18315.738: 99.0679% ( 3) 00:10:15.419 18315.738 - 18430.211: 99.0880% ( 3) 00:10:15.419 18430.211 - 18544.685: 99.1081% ( 3) 00:10:15.419 18544.685 - 18659.158: 99.1215% ( 2) 00:10:15.419 18659.158 - 18773.631: 99.1416% ( 3) 00:10:15.419 29305.181 - 29534.128: 99.2087% ( 10) 00:10:15.419 29534.128 - 29763.074: 99.3026% ( 14) 00:10:15.419 29763.074 - 29992.021: 99.3965% ( 14) 00:10:15.419 29992.021 - 30220.968: 99.4903% ( 14) 00:10:15.419 30220.968 - 30449.914: 99.5708% ( 12) 00:10:15.419 34570.955 - 34799.902: 99.6043% ( 5) 00:10:15.419 34799.902 - 35028.849: 99.6982% ( 14) 00:10:15.419 35028.849 - 35257.796: 99.7921% ( 14) 00:10:15.419 35257.796 - 35486.742: 99.8927% ( 15) 00:10:15.419 35486.742 - 35715.689: 99.9866% ( 14) 00:10:15.419 35715.689 - 35944.636: 100.0000% ( 2) 00:10:15.419 00:10:15.419 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:15.419 ============================================================================== 00:10:15.419 Range in us Cumulative IO count 00:10:15.419 4636.171 - 4664.790: 0.0201% ( 3) 00:10:15.419 4664.790 - 4693.408: 0.0536% ( 5) 00:10:15.419 4693.408 - 4722.026: 0.0671% ( 2) 00:10:15.419 4722.026 - 4750.645: 0.0805% ( 2) 00:10:15.419 4750.645 - 4779.263: 0.1006% ( 3) 00:10:15.419 4779.263 - 4807.881: 0.1073% ( 1) 00:10:15.419 4807.881 - 4836.500: 0.1207% ( 2) 00:10:15.419 4836.500 - 4865.118: 0.1408% ( 3) 00:10:15.419 4865.118 - 4893.736: 0.1609% ( 3) 00:10:15.419 4893.736 - 4922.355: 0.1878% ( 4) 00:10:15.419 4922.355 - 4950.973: 0.2012% ( 2) 00:10:15.419 4950.973 - 4979.591: 0.2213% ( 3) 00:10:15.419 4979.591 - 5008.210: 0.2414% ( 3) 00:10:15.419 5008.210 - 5036.828: 0.2615% ( 3) 00:10:15.419 5036.828 - 5065.446: 0.2749% ( 2) 00:10:15.419 5065.446 - 5094.065: 0.2884% ( 2) 00:10:15.419 5094.065 - 5122.683: 0.3018% ( 2) 00:10:15.419 5122.683 - 5151.301: 0.3152% ( 2) 00:10:15.419 5151.301 - 5179.920: 0.3286% ( 2) 00:10:15.419 5179.920 - 5208.538: 0.3353% ( 1) 00:10:15.419 5208.538 - 5237.156: 0.3554% ( 3) 00:10:15.419 5237.156 - 5265.775: 0.3688% ( 2) 00:10:15.419 5265.775 - 5294.393: 0.3822% ( 2) 00:10:15.419 5294.393 - 5323.011: 0.3889% ( 1) 00:10:15.419 5323.011 - 5351.630: 0.4024% ( 2) 00:10:15.419 5351.630 - 5380.248: 0.4158% ( 2) 00:10:15.419 5380.248 - 5408.866: 0.4292% ( 2) 00:10:15.419 7211.822 - 7240.440: 0.4493% ( 3) 00:10:15.419 7240.440 - 7269.059: 
0.5231% ( 11) 00:10:15.419 7269.059 - 7297.677: 0.5767% ( 8) 00:10:15.419 7297.677 - 7326.295: 0.6371% ( 9) 00:10:15.419 7326.295 - 7383.532: 0.9254% ( 43) 00:10:15.419 7383.532 - 7440.769: 1.4686% ( 81) 00:10:15.420 7440.769 - 7498.005: 2.2398% ( 115) 00:10:15.420 7498.005 - 7555.242: 3.4402% ( 179) 00:10:15.420 7555.242 - 7612.479: 5.0563% ( 241) 00:10:15.420 7612.479 - 7669.715: 7.4504% ( 357) 00:10:15.420 7669.715 - 7726.952: 10.7766% ( 496) 00:10:15.420 7726.952 - 7784.189: 15.1757% ( 656) 00:10:15.420 7784.189 - 7841.425: 19.9638% ( 714) 00:10:15.420 7841.425 - 7898.662: 24.5507% ( 684) 00:10:15.420 7898.662 - 7955.899: 29.4461% ( 730) 00:10:15.420 7955.899 - 8013.135: 34.4957% ( 753) 00:10:15.420 8013.135 - 8070.372: 39.4313% ( 736) 00:10:15.420 8070.372 - 8127.609: 44.4810% ( 753) 00:10:15.420 8127.609 - 8184.845: 49.6513% ( 771) 00:10:15.420 8184.845 - 8242.082: 54.8484% ( 775) 00:10:15.420 8242.082 - 8299.319: 59.8645% ( 748) 00:10:15.420 8299.319 - 8356.555: 64.8203% ( 739) 00:10:15.420 8356.555 - 8413.792: 69.5078% ( 699) 00:10:15.420 8413.792 - 8471.029: 73.7594% ( 634) 00:10:15.420 8471.029 - 8528.266: 77.5684% ( 568) 00:10:15.420 8528.266 - 8585.502: 80.4587% ( 431) 00:10:15.420 8585.502 - 8642.739: 82.5443% ( 311) 00:10:15.420 8642.739 - 8699.976: 84.0129% ( 219) 00:10:15.420 8699.976 - 8757.212: 85.1328% ( 167) 00:10:15.420 8757.212 - 8814.449: 85.9375% ( 120) 00:10:15.420 8814.449 - 8871.686: 86.6081% ( 100) 00:10:15.420 8871.686 - 8928.922: 87.2116% ( 90) 00:10:15.420 8928.922 - 8986.159: 87.8219% ( 91) 00:10:15.420 8986.159 - 9043.396: 88.3315% ( 76) 00:10:15.420 9043.396 - 9100.632: 88.7473% ( 62) 00:10:15.420 9100.632 - 9157.869: 89.0759% ( 49) 00:10:15.420 9157.869 - 9215.106: 89.4582% ( 57) 00:10:15.420 9215.106 - 9272.342: 89.8940% ( 65) 00:10:15.420 9272.342 - 9329.579: 90.3031% ( 61) 00:10:15.420 9329.579 - 9386.816: 90.6585% ( 53) 00:10:15.420 9386.816 - 9444.052: 91.0810% ( 63) 00:10:15.420 9444.052 - 9501.289: 91.5303% ( 67) 00:10:15.420 9501.289 - 9558.526: 91.9461% ( 62) 00:10:15.420 9558.526 - 9615.762: 92.4222% ( 71) 00:10:15.420 9615.762 - 9672.999: 92.8581% ( 65) 00:10:15.420 9672.999 - 9730.236: 93.3074% ( 67) 00:10:15.420 9730.236 - 9787.472: 93.7299% ( 63) 00:10:15.420 9787.472 - 9844.709: 94.1255% ( 59) 00:10:15.420 9844.709 - 9901.946: 94.4810% ( 53) 00:10:15.420 9901.946 - 9959.183: 94.8163% ( 50) 00:10:15.420 9959.183 - 10016.419: 95.1851% ( 55) 00:10:15.420 10016.419 - 10073.656: 95.5740% ( 58) 00:10:15.420 10073.656 - 10130.893: 95.9227% ( 52) 00:10:15.420 10130.893 - 10188.129: 96.1977% ( 41) 00:10:15.420 10188.129 - 10245.366: 96.3989% ( 30) 00:10:15.420 10245.366 - 10302.603: 96.5866% ( 28) 00:10:15.420 10302.603 - 10359.839: 96.7342% ( 22) 00:10:15.420 10359.839 - 10417.076: 96.8616% ( 19) 00:10:15.420 10417.076 - 10474.313: 96.9421% ( 12) 00:10:15.420 10474.313 - 10531.549: 97.0225% ( 12) 00:10:15.420 10531.549 - 10588.786: 97.0963% ( 11) 00:10:15.420 10588.786 - 10646.023: 97.1835% ( 13) 00:10:15.420 10646.023 - 10703.259: 97.2572% ( 11) 00:10:15.420 10703.259 - 10760.496: 97.3310% ( 11) 00:10:15.420 10760.496 - 10817.733: 97.4182% ( 13) 00:10:15.420 10817.733 - 10874.969: 97.4987% ( 12) 00:10:15.420 10874.969 - 10932.206: 97.5724% ( 11) 00:10:15.420 10932.206 - 10989.443: 97.6261% ( 8) 00:10:15.420 10989.443 - 11046.679: 97.6529% ( 4) 00:10:15.420 11046.679 - 11103.916: 97.6797% ( 4) 00:10:15.420 11103.916 - 11161.153: 97.7133% ( 5) 00:10:15.420 11161.153 - 11218.390: 97.7401% ( 4) 00:10:15.420 11218.390 - 11275.626: 97.7669% ( 4) 
00:10:15.420 11275.626 - 11332.863: 97.8004% ( 5) 00:10:15.420 11332.863 - 11390.100: 97.8205% ( 3) 00:10:15.420 11390.100 - 11447.336: 97.8340% ( 2) 00:10:15.420 11447.336 - 11504.573: 97.8474% ( 2) 00:10:15.420 11504.573 - 11561.810: 97.8541% ( 1) 00:10:15.420 12305.886 - 12363.123: 97.8608% ( 1) 00:10:15.420 12363.123 - 12420.360: 97.8809% ( 3) 00:10:15.420 12420.360 - 12477.597: 97.8876% ( 1) 00:10:15.420 12477.597 - 12534.833: 97.9010% ( 2) 00:10:15.420 12534.833 - 12592.070: 97.9144% ( 2) 00:10:15.420 12592.070 - 12649.307: 97.9211% ( 1) 00:10:15.420 12649.307 - 12706.543: 97.9345% ( 2) 00:10:15.420 12706.543 - 12763.780: 97.9480% ( 2) 00:10:15.420 12763.780 - 12821.017: 97.9547% ( 1) 00:10:15.420 12821.017 - 12878.253: 97.9681% ( 2) 00:10:15.420 12878.253 - 12935.490: 97.9815% ( 2) 00:10:15.420 12935.490 - 12992.727: 97.9949% ( 2) 00:10:15.420 12992.727 - 13049.963: 98.0016% ( 1) 00:10:15.420 13049.963 - 13107.200: 98.0150% ( 2) 00:10:15.420 13107.200 - 13164.437: 98.0284% ( 2) 00:10:15.420 13164.437 - 13221.673: 98.0418% ( 2) 00:10:15.420 13221.673 - 13278.910: 98.0486% ( 1) 00:10:15.420 13278.910 - 13336.147: 98.0620% ( 2) 00:10:15.420 13336.147 - 13393.383: 98.0754% ( 2) 00:10:15.420 13393.383 - 13450.620: 98.0888% ( 2) 00:10:15.420 13450.620 - 13507.857: 98.1022% ( 2) 00:10:15.420 13507.857 - 13565.093: 98.1089% ( 1) 00:10:15.420 13565.093 - 13622.330: 98.1223% ( 2) 00:10:15.420 13622.330 - 13679.567: 98.1357% ( 2) 00:10:15.420 13679.567 - 13736.803: 98.1424% ( 1) 00:10:15.420 13736.803 - 13794.040: 98.1558% ( 2) 00:10:15.420 13794.040 - 13851.277: 98.1693% ( 2) 00:10:15.420 13851.277 - 13908.514: 98.1827% ( 2) 00:10:15.420 13908.514 - 13965.750: 98.1961% ( 2) 00:10:15.420 13965.750 - 14022.987: 98.2028% ( 1) 00:10:15.420 14022.987 - 14080.224: 98.2162% ( 2) 00:10:15.420 14080.224 - 14137.460: 98.2296% ( 2) 00:10:15.420 14137.460 - 14194.697: 98.2430% ( 2) 00:10:15.420 14251.934 - 14309.170: 98.2564% ( 2) 00:10:15.420 14309.170 - 14366.407: 98.2698% ( 2) 00:10:15.420 14366.407 - 14423.644: 98.2833% ( 2) 00:10:15.420 15224.957 - 15339.431: 98.3034% ( 3) 00:10:15.420 15339.431 - 15453.904: 98.3235% ( 3) 00:10:15.420 15453.904 - 15568.377: 98.3503% ( 4) 00:10:15.420 15568.377 - 15682.851: 98.3771% ( 4) 00:10:15.420 15682.851 - 15797.324: 98.4107% ( 5) 00:10:15.420 15797.324 - 15911.797: 98.4643% ( 8) 00:10:15.420 15911.797 - 16026.271: 98.5180% ( 8) 00:10:15.420 16026.271 - 16140.744: 98.5582% ( 6) 00:10:15.420 16140.744 - 16255.217: 98.6052% ( 7) 00:10:15.420 16255.217 - 16369.691: 98.6454% ( 6) 00:10:15.420 16369.691 - 16484.164: 98.6990% ( 8) 00:10:15.420 16484.164 - 16598.638: 98.7460% ( 7) 00:10:15.420 16598.638 - 16713.111: 98.7929% ( 7) 00:10:15.420 16713.111 - 16827.584: 98.8399% ( 7) 00:10:15.420 16827.584 - 16942.058: 98.8801% ( 6) 00:10:15.420 16942.058 - 17056.531: 98.9270% ( 7) 00:10:15.420 17056.531 - 17171.004: 98.9673% ( 6) 00:10:15.420 17171.004 - 17285.478: 98.9941% ( 4) 00:10:15.420 17285.478 - 17399.951: 99.0142% ( 3) 00:10:15.420 17399.951 - 17514.424: 99.0343% ( 3) 00:10:15.420 17514.424 - 17628.898: 99.0545% ( 3) 00:10:15.420 17628.898 - 17743.371: 99.0746% ( 3) 00:10:15.420 17743.371 - 17857.845: 99.0947% ( 3) 00:10:15.420 17857.845 - 17972.318: 99.1148% ( 3) 00:10:15.420 17972.318 - 18086.791: 99.1349% ( 3) 00:10:15.420 18086.791 - 18201.265: 99.1416% ( 1) 00:10:15.420 28847.287 - 28961.761: 99.1483% ( 1) 00:10:15.420 28961.761 - 29076.234: 99.1886% ( 6) 00:10:15.420 29076.234 - 29190.707: 99.2422% ( 8) 00:10:15.420 29190.707 - 29305.181: 99.2892% ( 7) 
00:10:15.420 29305.181 - 29534.128: 99.3830% ( 14) 00:10:15.420 29534.128 - 29763.074: 99.4769% ( 14) 00:10:15.420 29763.074 - 29992.021: 99.5708% ( 14) 00:10:15.420 34113.062 - 34342.009: 99.6043% ( 5) 00:10:15.420 34342.009 - 34570.955: 99.6982% ( 14) 00:10:15.420 34570.955 - 34799.902: 99.7988% ( 15) 00:10:15.420 34799.902 - 35028.849: 99.8927% ( 14) 00:10:15.420 35028.849 - 35257.796: 99.9866% ( 14) 00:10:15.420 35257.796 - 35486.742: 100.0000% ( 2) 00:10:15.420 00:10:15.420 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:15.420 ============================================================================== 00:10:15.420 Range in us Cumulative IO count 00:10:15.420 4349.988 - 4378.606: 0.0267% ( 4) 00:10:15.420 4378.606 - 4407.224: 0.0467% ( 3) 00:10:15.420 4435.843 - 4464.461: 0.0534% ( 1) 00:10:15.420 4464.461 - 4493.079: 0.0668% ( 2) 00:10:15.420 4493.079 - 4521.698: 0.0868% ( 3) 00:10:15.420 4521.698 - 4550.316: 0.1002% ( 2) 00:10:15.420 4550.316 - 4578.934: 0.1202% ( 3) 00:10:15.420 4578.934 - 4607.553: 0.1469% ( 4) 00:10:15.420 4607.553 - 4636.171: 0.1603% ( 2) 00:10:15.420 4636.171 - 4664.790: 0.1736% ( 2) 00:10:15.420 4664.790 - 4693.408: 0.1936% ( 3) 00:10:15.420 4693.408 - 4722.026: 0.2137% ( 3) 00:10:15.420 4722.026 - 4750.645: 0.2270% ( 2) 00:10:15.420 4750.645 - 4779.263: 0.2471% ( 3) 00:10:15.420 4779.263 - 4807.881: 0.2604% ( 2) 00:10:15.420 4807.881 - 4836.500: 0.2738% ( 2) 00:10:15.420 4836.500 - 4865.118: 0.2871% ( 2) 00:10:15.420 4865.118 - 4893.736: 0.3005% ( 2) 00:10:15.420 4893.736 - 4922.355: 0.3138% ( 2) 00:10:15.420 4922.355 - 4950.973: 0.3272% ( 2) 00:10:15.420 4950.973 - 4979.591: 0.3405% ( 2) 00:10:15.420 4979.591 - 5008.210: 0.3606% ( 3) 00:10:15.420 5008.210 - 5036.828: 0.3739% ( 2) 00:10:15.420 5036.828 - 5065.446: 0.3873% ( 2) 00:10:15.420 5065.446 - 5094.065: 0.4073% ( 3) 00:10:15.420 5094.065 - 5122.683: 0.4207% ( 2) 00:10:15.420 5122.683 - 5151.301: 0.4274% ( 1) 00:10:15.420 6839.783 - 6868.402: 0.4340% ( 1) 00:10:15.420 6868.402 - 6897.020: 0.4741% ( 6) 00:10:15.420 6897.020 - 6925.638: 0.5008% ( 4) 00:10:15.420 6925.638 - 6954.257: 0.5142% ( 2) 00:10:15.420 6982.875 - 7011.493: 0.5275% ( 2) 00:10:15.420 7011.493 - 7040.112: 0.5475% ( 3) 00:10:15.420 7040.112 - 7068.730: 0.5676% ( 3) 00:10:15.420 7068.730 - 7097.348: 0.5809% ( 2) 00:10:15.421 7097.348 - 7125.967: 0.5943% ( 2) 00:10:15.421 7125.967 - 7154.585: 0.6010% ( 1) 00:10:15.421 7154.585 - 7183.203: 0.6143% ( 2) 00:10:15.421 7183.203 - 7211.822: 0.6277% ( 2) 00:10:15.421 7211.822 - 7240.440: 0.6544% ( 4) 00:10:15.421 7240.440 - 7269.059: 0.6878% ( 5) 00:10:15.421 7269.059 - 7297.677: 0.7479% ( 9) 00:10:15.421 7297.677 - 7326.295: 0.7946% ( 7) 00:10:15.421 7326.295 - 7383.532: 1.1285% ( 50) 00:10:15.421 7383.532 - 7440.769: 1.5825% ( 68) 00:10:15.421 7440.769 - 7498.005: 2.3237% ( 111) 00:10:15.421 7498.005 - 7555.242: 3.3520% ( 154) 00:10:15.421 7555.242 - 7612.479: 5.0414% ( 253) 00:10:15.421 7612.479 - 7669.715: 7.6522% ( 391) 00:10:15.421 7669.715 - 7726.952: 10.9241% ( 490) 00:10:15.421 7726.952 - 7784.189: 15.2377% ( 646) 00:10:15.421 7784.189 - 7841.425: 19.7850% ( 681) 00:10:15.421 7841.425 - 7898.662: 24.4391% ( 697) 00:10:15.421 7898.662 - 7955.899: 29.3336% ( 733) 00:10:15.421 7955.899 - 8013.135: 34.2481% ( 736) 00:10:15.421 8013.135 - 8070.372: 39.4164% ( 774) 00:10:15.421 8070.372 - 8127.609: 44.5580% ( 770) 00:10:15.421 8127.609 - 8184.845: 49.6394% ( 761) 00:10:15.421 8184.845 - 8242.082: 54.8210% ( 776) 00:10:15.421 8242.082 - 8299.319: 59.9292% ( 765) 
00:10:15.421 8299.319 - 8356.555: 64.8170% ( 732) 00:10:15.421 8356.555 - 8413.792: 69.4378% ( 692) 00:10:15.421 8413.792 - 8471.029: 73.7113% ( 640) 00:10:15.421 8471.029 - 8528.266: 77.3771% ( 549) 00:10:15.421 8528.266 - 8585.502: 80.2150% ( 425) 00:10:15.421 8585.502 - 8642.739: 82.3518% ( 320) 00:10:15.421 8642.739 - 8699.976: 83.7540% ( 210) 00:10:15.421 8699.976 - 8757.212: 84.8291% ( 161) 00:10:15.421 8757.212 - 8814.449: 85.6504% ( 123) 00:10:15.421 8814.449 - 8871.686: 86.3448% ( 104) 00:10:15.421 8871.686 - 8928.922: 86.8990% ( 83) 00:10:15.421 8928.922 - 8986.159: 87.4199% ( 78) 00:10:15.421 8986.159 - 9043.396: 87.9607% ( 81) 00:10:15.421 9043.396 - 9100.632: 88.3747% ( 62) 00:10:15.421 9100.632 - 9157.869: 88.6886% ( 47) 00:10:15.421 9157.869 - 9215.106: 89.0759% ( 58) 00:10:15.421 9215.106 - 9272.342: 89.4832% ( 61) 00:10:15.421 9272.342 - 9329.579: 89.8705% ( 58) 00:10:15.421 9329.579 - 9386.816: 90.2978% ( 64) 00:10:15.421 9386.816 - 9444.052: 90.7252% ( 64) 00:10:15.421 9444.052 - 9501.289: 91.1325% ( 61) 00:10:15.421 9501.289 - 9558.526: 91.6132% ( 72) 00:10:15.421 9558.526 - 9615.762: 92.0740% ( 69) 00:10:15.421 9615.762 - 9672.999: 92.5347% ( 69) 00:10:15.421 9672.999 - 9730.236: 92.9955% ( 69) 00:10:15.421 9730.236 - 9787.472: 93.4228% ( 64) 00:10:15.421 9787.472 - 9844.709: 93.8502% ( 64) 00:10:15.421 9844.709 - 9901.946: 94.2575% ( 61) 00:10:15.421 9901.946 - 9959.183: 94.6114% ( 53) 00:10:15.421 9959.183 - 10016.419: 95.0254% ( 62) 00:10:15.421 10016.419 - 10073.656: 95.4127% ( 58) 00:10:15.421 10073.656 - 10130.893: 95.7732% ( 54) 00:10:15.421 10130.893 - 10188.129: 96.0403% ( 40) 00:10:15.421 10188.129 - 10245.366: 96.2540% ( 32) 00:10:15.421 10245.366 - 10302.603: 96.4610% ( 31) 00:10:15.421 10302.603 - 10359.839: 96.6079% ( 22) 00:10:15.421 10359.839 - 10417.076: 96.7548% ( 22) 00:10:15.421 10417.076 - 10474.313: 96.8616% ( 16) 00:10:15.421 10474.313 - 10531.549: 96.9551% ( 14) 00:10:15.421 10531.549 - 10588.786: 97.0286% ( 11) 00:10:15.421 10588.786 - 10646.023: 97.1087% ( 12) 00:10:15.421 10646.023 - 10703.259: 97.1755% ( 10) 00:10:15.421 10703.259 - 10760.496: 97.2556% ( 12) 00:10:15.421 10760.496 - 10817.733: 97.3357% ( 12) 00:10:15.421 10817.733 - 10874.969: 97.4225% ( 13) 00:10:15.421 10874.969 - 10932.206: 97.4693% ( 7) 00:10:15.421 10932.206 - 10989.443: 97.4893% ( 3) 00:10:15.421 10989.443 - 11046.679: 97.5160% ( 4) 00:10:15.421 11046.679 - 11103.916: 97.5494% ( 5) 00:10:15.421 11103.916 - 11161.153: 97.5628% ( 2) 00:10:15.421 11161.153 - 11218.390: 97.5761% ( 2) 00:10:15.421 11218.390 - 11275.626: 97.5895% ( 2) 00:10:15.421 11275.626 - 11332.863: 97.6028% ( 2) 00:10:15.421 11332.863 - 11390.100: 97.6162% ( 2) 00:10:15.421 11390.100 - 11447.336: 97.6295% ( 2) 00:10:15.421 11447.336 - 11504.573: 97.6429% ( 2) 00:10:15.421 11504.573 - 11561.810: 97.6896% ( 7) 00:10:15.421 11561.810 - 11619.046: 97.7163% ( 4) 00:10:15.421 11619.046 - 11676.283: 97.7364% ( 3) 00:10:15.421 11676.283 - 11733.520: 97.7564% ( 3) 00:10:15.421 11733.520 - 11790.756: 97.7831% ( 4) 00:10:15.421 11790.756 - 11847.993: 97.8098% ( 4) 00:10:15.421 11847.993 - 11905.230: 97.8299% ( 3) 00:10:15.421 11905.230 - 11962.466: 97.8566% ( 4) 00:10:15.421 11962.466 - 12019.703: 97.8766% ( 3) 00:10:15.421 12019.703 - 12076.940: 97.9033% ( 4) 00:10:15.421 12076.940 - 12134.176: 97.9300% ( 4) 00:10:15.421 12134.176 - 12191.413: 97.9501% ( 3) 00:10:15.421 12191.413 - 12248.650: 97.9768% ( 4) 00:10:15.421 12248.650 - 12305.886: 97.9968% ( 3) 00:10:15.421 12305.886 - 12363.123: 98.0235% ( 4) 
00:10:15.421 12363.123 - 12420.360: 98.0502% ( 4) 00:10:15.421 12420.360 - 12477.597: 98.0702% ( 3) 00:10:15.421 12477.597 - 12534.833: 98.0903% ( 3) 00:10:15.421 12534.833 - 12592.070: 98.1036% ( 2) 00:10:15.421 12592.070 - 12649.307: 98.1170% ( 2) 00:10:15.421 12649.307 - 12706.543: 98.1237% ( 1) 00:10:15.421 12706.543 - 12763.780: 98.1370% ( 2) 00:10:15.421 12763.780 - 12821.017: 98.1437% ( 1) 00:10:15.421 12821.017 - 12878.253: 98.1571% ( 2) 00:10:15.421 12878.253 - 12935.490: 98.1704% ( 2) 00:10:15.421 12935.490 - 12992.727: 98.1838% ( 2) 00:10:15.421 12992.727 - 13049.963: 98.1904% ( 1) 00:10:15.421 13049.963 - 13107.200: 98.2038% ( 2) 00:10:15.421 13107.200 - 13164.437: 98.2171% ( 2) 00:10:15.421 13164.437 - 13221.673: 98.2305% ( 2) 00:10:15.421 13221.673 - 13278.910: 98.2439% ( 2) 00:10:15.421 13278.910 - 13336.147: 98.2505% ( 1) 00:10:15.421 13336.147 - 13393.383: 98.2639% ( 2) 00:10:15.421 13450.620 - 13507.857: 98.2772% ( 2) 00:10:15.421 13507.857 - 13565.093: 98.2906% ( 2) 00:10:15.421 15110.484 - 15224.957: 98.2973% ( 1) 00:10:15.421 15224.957 - 15339.431: 98.3307% ( 5) 00:10:15.421 15339.431 - 15453.904: 98.3440% ( 2) 00:10:15.421 15453.904 - 15568.377: 98.3640% ( 3) 00:10:15.421 15568.377 - 15682.851: 98.3908% ( 4) 00:10:15.421 15682.851 - 15797.324: 98.4041% ( 2) 00:10:15.421 15797.324 - 15911.797: 98.4308% ( 4) 00:10:15.421 15911.797 - 16026.271: 98.4509% ( 3) 00:10:15.421 16026.271 - 16140.744: 98.4709% ( 3) 00:10:15.421 16140.744 - 16255.217: 98.5043% ( 5) 00:10:15.421 16255.217 - 16369.691: 98.5443% ( 6) 00:10:15.421 16369.691 - 16484.164: 98.5911% ( 7) 00:10:15.421 16484.164 - 16598.638: 98.6378% ( 7) 00:10:15.421 16598.638 - 16713.111: 98.6846% ( 7) 00:10:15.421 16713.111 - 16827.584: 98.7246% ( 6) 00:10:15.421 16827.584 - 16942.058: 98.7780% ( 8) 00:10:15.421 16942.058 - 17056.531: 98.8248% ( 7) 00:10:15.421 17056.531 - 17171.004: 98.8649% ( 6) 00:10:15.421 17171.004 - 17285.478: 98.9116% ( 7) 00:10:15.421 17285.478 - 17399.951: 98.9517% ( 6) 00:10:15.421 17399.951 - 17514.424: 98.9984% ( 7) 00:10:15.421 17514.424 - 17628.898: 99.0318% ( 5) 00:10:15.421 17628.898 - 17743.371: 99.0585% ( 4) 00:10:15.421 17743.371 - 17857.845: 99.0785% ( 3) 00:10:15.421 17857.845 - 17972.318: 99.1052% ( 4) 00:10:15.421 17972.318 - 18086.791: 99.1319% ( 4) 00:10:15.421 18086.791 - 18201.265: 99.1453% ( 2) 00:10:15.421 23123.619 - 23238.093: 99.1587% ( 2) 00:10:15.421 23238.093 - 23352.566: 99.2054% ( 7) 00:10:15.421 23352.566 - 23467.039: 99.2521% ( 7) 00:10:15.421 23467.039 - 23581.513: 99.2989% ( 7) 00:10:15.421 23581.513 - 23695.986: 99.3523% ( 8) 00:10:15.421 23695.986 - 23810.459: 99.3990% ( 7) 00:10:15.421 23810.459 - 23924.933: 99.4458% ( 7) 00:10:15.421 23924.933 - 24039.406: 99.4925% ( 7) 00:10:15.421 24039.406 - 24153.879: 99.5459% ( 8) 00:10:15.421 24153.879 - 24268.353: 99.5726% ( 4) 00:10:15.421 28961.761 - 29076.234: 99.5927% ( 3) 00:10:15.421 29076.234 - 29190.707: 99.6394% ( 7) 00:10:15.421 29190.707 - 29305.181: 99.6795% ( 6) 00:10:15.421 29305.181 - 29534.128: 99.7730% ( 14) 00:10:15.421 29534.128 - 29763.074: 99.8665% ( 14) 00:10:15.421 29763.074 - 29992.021: 99.9599% ( 14) 00:10:15.421 29992.021 - 30220.968: 100.0000% ( 6) 00:10:15.421 00:10:15.421 18:28:15 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:16.802 Initializing NVMe Controllers 00:10:16.802 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:16.802 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 
00:10:16.802 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:16.802 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:16.802 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:16.802 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:16.802 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:16.802 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:16.802 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:16.802 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:16.802 Initialization complete. Launching workers. 00:10:16.802 ======================================================== 00:10:16.802 Latency(us) 00:10:16.802 Device Information : IOPS MiB/s Average min max 00:10:16.802 PCIE (0000:00:10.0) NSID 1 from core 0: 9020.67 105.71 14209.89 9197.47 37563.45 00:10:16.802 PCIE (0000:00:11.0) NSID 1 from core 0: 9020.67 105.71 14200.55 9201.54 36688.41 00:10:16.802 PCIE (0000:00:13.0) NSID 1 from core 0: 9020.67 105.71 14191.82 8917.07 37728.26 00:10:16.802 PCIE (0000:00:12.0) NSID 1 from core 0: 9020.67 105.71 14183.07 8569.37 37210.25 00:10:16.802 PCIE (0000:00:12.0) NSID 2 from core 0: 9020.67 105.71 14174.29 7882.04 37353.67 00:10:16.802 PCIE (0000:00:12.0) NSID 3 from core 0: 9084.20 106.46 14067.11 7349.21 30022.50 00:10:16.802 ======================================================== 00:10:16.802 Total : 54187.54 635.01 14171.00 7349.21 37728.26 00:10:16.802 00:10:16.802 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:16.802 ================================================================================= 00:10:16.802 1.00000% : 9730.236us 00:10:16.803 10.00000% : 10531.549us 00:10:16.803 25.00000% : 11905.230us 00:10:16.803 50.00000% : 14080.224us 00:10:16.803 75.00000% : 15797.324us 00:10:16.803 90.00000% : 17972.318us 00:10:16.803 95.00000% : 19117.052us 00:10:16.803 98.00000% : 21063.099us 00:10:16.803 99.00000% : 28274.921us 00:10:16.803 99.50000% : 36631.476us 00:10:16.803 99.90000% : 37547.263us 00:10:16.803 99.99000% : 37776.210us 00:10:16.803 99.99900% : 37776.210us 00:10:16.803 99.99990% : 37776.210us 00:10:16.803 99.99999% : 37776.210us 00:10:16.803 00:10:16.803 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:16.803 ================================================================================= 00:10:16.803 1.00000% : 9787.472us 00:10:16.803 10.00000% : 10531.549us 00:10:16.803 25.00000% : 11619.046us 00:10:16.803 50.00000% : 14251.934us 00:10:16.803 75.00000% : 15797.324us 00:10:16.803 90.00000% : 18201.265us 00:10:16.803 95.00000% : 19117.052us 00:10:16.803 98.00000% : 21520.992us 00:10:16.803 99.00000% : 28045.974us 00:10:16.803 99.50000% : 35944.636us 00:10:16.803 99.90000% : 36631.476us 00:10:16.803 99.99000% : 36860.423us 00:10:16.803 99.99900% : 36860.423us 00:10:16.803 99.99990% : 36860.423us 00:10:16.803 99.99999% : 36860.423us 00:10:16.803 00:10:16.803 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:16.803 ================================================================================= 00:10:16.803 1.00000% : 9615.762us 00:10:16.803 10.00000% : 10417.076us 00:10:16.803 25.00000% : 11733.520us 00:10:16.803 50.00000% : 14194.697us 00:10:16.803 75.00000% : 15682.851us 00:10:16.803 90.00000% : 17972.318us 00:10:16.803 95.00000% : 18888.105us 00:10:16.803 98.00000% : 21177.572us 00:10:16.803 99.00000% : 29305.181us 00:10:16.803 99.50000% : 37089.369us 00:10:16.803 99.90000% : 37776.210us 00:10:16.803 99.99000% : 37776.210us 
00:10:16.803 99.99900% : 37776.210us 00:10:16.803 99.99990% : 37776.210us 00:10:16.803 99.99999% : 37776.210us 00:10:16.803 00:10:16.803 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:16.803 ================================================================================= 00:10:16.803 1.00000% : 9615.762us 00:10:16.803 10.00000% : 10417.076us 00:10:16.803 25.00000% : 11733.520us 00:10:16.803 50.00000% : 14080.224us 00:10:16.803 75.00000% : 15911.797us 00:10:16.803 90.00000% : 17857.845us 00:10:16.803 95.00000% : 19231.525us 00:10:16.803 98.00000% : 22322.306us 00:10:16.803 99.00000% : 29076.234us 00:10:16.803 99.50000% : 36631.476us 00:10:16.803 99.90000% : 37089.369us 00:10:16.803 99.99000% : 37318.316us 00:10:16.803 99.99900% : 37318.316us 00:10:16.803 99.99990% : 37318.316us 00:10:16.803 99.99999% : 37318.316us 00:10:16.803 00:10:16.803 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:16.803 ================================================================================= 00:10:16.803 1.00000% : 9501.289us 00:10:16.803 10.00000% : 10531.549us 00:10:16.803 25.00000% : 11790.756us 00:10:16.803 50.00000% : 14080.224us 00:10:16.803 75.00000% : 15911.797us 00:10:16.803 90.00000% : 17628.898us 00:10:16.803 95.00000% : 19460.472us 00:10:16.803 98.00000% : 20490.732us 00:10:16.803 99.00000% : 29190.707us 00:10:16.803 99.50000% : 36631.476us 00:10:16.803 99.90000% : 37318.316us 00:10:16.803 99.99000% : 37547.263us 00:10:16.803 99.99900% : 37547.263us 00:10:16.803 99.99990% : 37547.263us 00:10:16.803 99.99999% : 37547.263us 00:10:16.803 00:10:16.803 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:16.803 ================================================================================= 00:10:16.803 1.00000% : 9615.762us 00:10:16.803 10.00000% : 10474.313us 00:10:16.803 25.00000% : 11905.230us 00:10:16.803 50.00000% : 14022.987us 00:10:16.803 75.00000% : 15797.324us 00:10:16.803 90.00000% : 17743.371us 00:10:16.803 95.00000% : 19460.472us 00:10:16.803 98.00000% : 20605.205us 00:10:16.803 99.00000% : 21520.992us 00:10:16.803 99.50000% : 29305.181us 00:10:16.803 99.90000% : 29992.021us 00:10:16.803 99.99000% : 30220.968us 00:10:16.803 99.99900% : 30220.968us 00:10:16.803 99.99990% : 30220.968us 00:10:16.803 99.99999% : 30220.968us 00:10:16.803 00:10:16.803 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:16.803 ============================================================================== 00:10:16.803 Range in us Cumulative IO count 00:10:16.803 9157.869 - 9215.106: 0.0330% ( 3) 00:10:16.803 9215.106 - 9272.342: 0.0660% ( 3) 00:10:16.803 9272.342 - 9329.579: 0.0990% ( 3) 00:10:16.803 9329.579 - 9386.816: 0.1210% ( 2) 00:10:16.803 9386.816 - 9444.052: 0.1761% ( 5) 00:10:16.803 9444.052 - 9501.289: 0.2971% ( 11) 00:10:16.803 9501.289 - 9558.526: 0.4181% ( 11) 00:10:16.803 9558.526 - 9615.762: 0.5832% ( 15) 00:10:16.803 9615.762 - 9672.999: 0.7923% ( 19) 00:10:16.803 9672.999 - 9730.236: 1.1774% ( 35) 00:10:16.803 9730.236 - 9787.472: 1.5515% ( 34) 00:10:16.803 9787.472 - 9844.709: 2.1017% ( 50) 00:10:16.803 9844.709 - 9901.946: 2.7509% ( 59) 00:10:16.803 9901.946 - 9959.183: 3.5761% ( 75) 00:10:16.803 9959.183 - 10016.419: 4.5555% ( 89) 00:10:16.803 10016.419 - 10073.656: 5.4357% ( 80) 00:10:16.803 10073.656 - 10130.893: 6.3600% ( 84) 00:10:16.803 10130.893 - 10188.129: 7.1413% ( 71) 00:10:16.803 10188.129 - 10245.366: 7.7135% ( 52) 00:10:16.803 10245.366 - 10302.603: 8.2526% ( 49) 00:10:16.803 10302.603 - 10359.839: 
8.7808% ( 48) 00:10:16.803 10359.839 - 10417.076: 9.2430% ( 42) 00:10:16.803 10417.076 - 10474.313: 9.9032% ( 60) 00:10:16.803 10474.313 - 10531.549: 10.7724% ( 79) 00:10:16.803 10531.549 - 10588.786: 11.5427% ( 70) 00:10:16.803 10588.786 - 10646.023: 12.4230% ( 80) 00:10:16.803 10646.023 - 10703.259: 13.2592% ( 76) 00:10:16.803 10703.259 - 10760.496: 14.2496% ( 90) 00:10:16.803 10760.496 - 10817.733: 15.0968% ( 77) 00:10:16.803 10817.733 - 10874.969: 15.8011% ( 64) 00:10:16.803 10874.969 - 10932.206: 16.4723% ( 61) 00:10:16.803 10932.206 - 10989.443: 17.0885% ( 56) 00:10:16.803 10989.443 - 11046.679: 17.9027% ( 74) 00:10:16.803 11046.679 - 11103.916: 18.6950% ( 72) 00:10:16.803 11103.916 - 11161.153: 19.3332% ( 58) 00:10:16.803 11161.153 - 11218.390: 20.2135% ( 80) 00:10:16.803 11218.390 - 11275.626: 20.8847% ( 61) 00:10:16.803 11275.626 - 11332.863: 21.3908% ( 46) 00:10:16.803 11332.863 - 11390.100: 21.7650% ( 34) 00:10:16.803 11390.100 - 11447.336: 22.0401% ( 25) 00:10:16.803 11447.336 - 11504.573: 22.5572% ( 47) 00:10:16.803 11504.573 - 11561.810: 22.9423% ( 35) 00:10:16.803 11561.810 - 11619.046: 23.3385% ( 36) 00:10:16.803 11619.046 - 11676.283: 23.6796% ( 31) 00:10:16.803 11676.283 - 11733.520: 23.9767% ( 27) 00:10:16.803 11733.520 - 11790.756: 24.2848% ( 28) 00:10:16.803 11790.756 - 11847.993: 24.8239% ( 49) 00:10:16.803 11847.993 - 11905.230: 25.4952% ( 61) 00:10:16.803 11905.230 - 11962.466: 26.1004% ( 55) 00:10:16.803 11962.466 - 12019.703: 26.7826% ( 62) 00:10:16.803 12019.703 - 12076.940: 27.5308% ( 68) 00:10:16.803 12076.940 - 12134.176: 28.4221% ( 81) 00:10:16.803 12134.176 - 12191.413: 29.0493% ( 57) 00:10:16.803 12191.413 - 12248.650: 29.5775% ( 48) 00:10:16.803 12248.650 - 12305.886: 30.1607% ( 53) 00:10:16.803 12305.886 - 12363.123: 30.7879% ( 57) 00:10:16.803 12363.123 - 12420.360: 31.3490% ( 51) 00:10:16.803 12420.360 - 12477.597: 31.7672% ( 38) 00:10:16.803 12477.597 - 12534.833: 32.2623% ( 45) 00:10:16.803 12534.833 - 12592.070: 32.9996% ( 67) 00:10:16.803 12592.070 - 12649.307: 33.5827% ( 53) 00:10:16.803 12649.307 - 12706.543: 34.1879% ( 55) 00:10:16.803 12706.543 - 12763.780: 35.0792% ( 81) 00:10:16.803 12763.780 - 12821.017: 35.8165% ( 67) 00:10:16.803 12821.017 - 12878.253: 36.3666% ( 50) 00:10:16.803 12878.253 - 12935.490: 37.0379% ( 61) 00:10:16.803 12935.490 - 12992.727: 37.7311% ( 63) 00:10:16.803 12992.727 - 13049.963: 38.4573% ( 66) 00:10:16.803 13049.963 - 13107.200: 39.0185% ( 51) 00:10:16.803 13107.200 - 13164.437: 39.5687% ( 50) 00:10:16.803 13164.437 - 13221.673: 40.1408% ( 52) 00:10:16.803 13221.673 - 13278.910: 40.6690% ( 48) 00:10:16.803 13278.910 - 13336.147: 41.2852% ( 56) 00:10:16.803 13336.147 - 13393.383: 42.1105% ( 75) 00:10:16.803 13393.383 - 13450.620: 42.8257% ( 65) 00:10:16.803 13450.620 - 13507.857: 43.4969% ( 61) 00:10:16.803 13507.857 - 13565.093: 44.2232% ( 66) 00:10:16.803 13565.093 - 13622.330: 44.8504% ( 57) 00:10:16.803 13622.330 - 13679.567: 45.4555% ( 55) 00:10:16.803 13679.567 - 13736.803: 46.0938% ( 58) 00:10:16.803 13736.803 - 13794.040: 46.7870% ( 63) 00:10:16.803 13794.040 - 13851.277: 47.6012% ( 74) 00:10:16.803 13851.277 - 13908.514: 48.2945% ( 63) 00:10:16.803 13908.514 - 13965.750: 49.0207% ( 66) 00:10:16.803 13965.750 - 14022.987: 49.5268% ( 46) 00:10:16.803 14022.987 - 14080.224: 50.5062% ( 89) 00:10:16.803 14080.224 - 14137.460: 51.3314% ( 75) 00:10:16.803 14137.460 - 14194.697: 51.9256% ( 54) 00:10:16.803 14194.697 - 14251.934: 52.6739% ( 68) 00:10:16.803 14251.934 - 14309.170: 53.5431% ( 79) 00:10:16.803 14309.170 
- 14366.407: 54.3024% ( 69) 00:10:16.803 14366.407 - 14423.644: 55.3257% ( 93) 00:10:16.803 14423.644 - 14480.880: 56.2390% ( 83) 00:10:16.803 14480.880 - 14538.117: 57.2513% ( 92) 00:10:16.803 14538.117 - 14595.354: 58.2416% ( 90) 00:10:16.803 14595.354 - 14652.590: 59.2540% ( 92) 00:10:16.803 14652.590 - 14767.064: 61.7188% ( 224) 00:10:16.803 14767.064 - 14881.537: 64.0295% ( 210) 00:10:16.803 14881.537 - 14996.010: 65.6360% ( 146) 00:10:16.803 14996.010 - 15110.484: 67.0555% ( 129) 00:10:16.803 15110.484 - 15224.957: 68.5079% ( 132) 00:10:16.803 15224.957 - 15339.431: 70.2025% ( 154) 00:10:16.803 15339.431 - 15453.904: 72.0951% ( 172) 00:10:16.803 15453.904 - 15568.377: 73.6906% ( 145) 00:10:16.803 15568.377 - 15682.851: 74.9230% ( 112) 00:10:16.803 15682.851 - 15797.324: 76.1774% ( 114) 00:10:16.803 15797.324 - 15911.797: 77.3327% ( 105) 00:10:16.803 15911.797 - 16026.271: 78.4551% ( 102) 00:10:16.803 16026.271 - 16140.744: 79.4674% ( 92) 00:10:16.803 16140.744 - 16255.217: 80.6558% ( 108) 00:10:16.803 16255.217 - 16369.691: 81.7342% ( 98) 00:10:16.803 16369.691 - 16484.164: 82.8785% ( 104) 00:10:16.803 16484.164 - 16598.638: 83.7478% ( 79) 00:10:16.803 16598.638 - 16713.111: 84.6501% ( 82) 00:10:16.803 16713.111 - 16827.584: 85.3763% ( 66) 00:10:16.803 16827.584 - 16942.058: 85.9045% ( 48) 00:10:16.803 16942.058 - 17056.531: 86.2896% ( 35) 00:10:16.803 17056.531 - 17171.004: 86.6087% ( 29) 00:10:16.803 17171.004 - 17285.478: 86.9938% ( 35) 00:10:16.803 17285.478 - 17399.951: 87.3019% ( 28) 00:10:16.803 17399.951 - 17514.424: 87.6540% ( 32) 00:10:16.803 17514.424 - 17628.898: 88.0832% ( 39) 00:10:16.804 17628.898 - 17743.371: 88.6554% ( 52) 00:10:16.804 17743.371 - 17857.845: 89.4696% ( 74) 00:10:16.804 17857.845 - 17972.318: 90.3829% ( 83) 00:10:16.804 17972.318 - 18086.791: 91.1642% ( 71) 00:10:16.804 18086.791 - 18201.265: 91.6703% ( 46) 00:10:16.804 18201.265 - 18315.738: 92.1655% ( 45) 00:10:16.804 18315.738 - 18430.211: 92.6386% ( 43) 00:10:16.804 18430.211 - 18544.685: 93.0568% ( 38) 00:10:16.804 18544.685 - 18659.158: 93.5409% ( 44) 00:10:16.804 18659.158 - 18773.631: 94.0581% ( 47) 00:10:16.804 18773.631 - 18888.105: 94.5753% ( 47) 00:10:16.804 18888.105 - 19002.578: 94.9274% ( 32) 00:10:16.804 19002.578 - 19117.052: 95.0704% ( 13) 00:10:16.804 19117.052 - 19231.525: 95.2465% ( 16) 00:10:16.804 19231.525 - 19345.998: 95.4996% ( 23) 00:10:16.804 19345.998 - 19460.472: 95.7416% ( 22) 00:10:16.804 19460.472 - 19574.945: 95.9287% ( 17) 00:10:16.804 19574.945 - 19689.418: 96.0717% ( 13) 00:10:16.804 19689.418 - 19803.892: 96.2918% ( 20) 00:10:16.804 19803.892 - 19918.365: 96.6439% ( 32) 00:10:16.804 19918.365 - 20032.838: 96.8200% ( 16) 00:10:16.804 20032.838 - 20147.312: 96.9740% ( 14) 00:10:16.804 20147.312 - 20261.785: 97.2271% ( 23) 00:10:16.804 20261.785 - 20376.259: 97.4032% ( 16) 00:10:16.804 20376.259 - 20490.732: 97.5352% ( 12) 00:10:16.804 20490.732 - 20605.205: 97.6452% ( 10) 00:10:16.804 20605.205 - 20719.679: 97.8323% ( 17) 00:10:16.804 20719.679 - 20834.152: 97.8763% ( 4) 00:10:16.804 20834.152 - 20948.625: 97.9423% ( 6) 00:10:16.804 20948.625 - 21063.099: 98.0084% ( 6) 00:10:16.804 21063.099 - 21177.572: 98.0964% ( 8) 00:10:16.804 21177.572 - 21292.045: 98.1844% ( 8) 00:10:16.804 21292.045 - 21406.519: 98.2504% ( 6) 00:10:16.804 21406.519 - 21520.992: 98.3495% ( 9) 00:10:16.804 21520.992 - 21635.466: 98.4155% ( 6) 00:10:16.804 21635.466 - 21749.939: 98.4705% ( 5) 00:10:16.804 21749.939 - 21864.412: 98.5585% ( 8) 00:10:16.804 21864.412 - 21978.886: 98.5915% ( 3) 
00:10:16.804 27702.554 - 27817.027: 98.6026% ( 1) 00:10:16.804 27817.027 - 27931.500: 98.7566% ( 14) 00:10:16.804 27931.500 - 28045.974: 98.9327% ( 16) 00:10:16.804 28045.974 - 28160.447: 98.9657% ( 3) 00:10:16.804 28160.447 - 28274.921: 99.0207% ( 5) 00:10:16.804 28274.921 - 28389.394: 99.0427% ( 2) 00:10:16.804 28389.394 - 28503.867: 99.1087% ( 6) 00:10:16.804 28503.867 - 28618.341: 99.1747% ( 6) 00:10:16.804 28618.341 - 28732.814: 99.2298% ( 5) 00:10:16.804 28732.814 - 28847.287: 99.2848% ( 5) 00:10:16.804 28847.287 - 28961.761: 99.2958% ( 1) 00:10:16.804 35944.636 - 36173.583: 99.3618% ( 6) 00:10:16.804 36173.583 - 36402.529: 99.4828% ( 11) 00:10:16.804 36402.529 - 36631.476: 99.6149% ( 12) 00:10:16.804 36631.476 - 36860.423: 99.6479% ( 3) 00:10:16.804 36860.423 - 37089.369: 99.7249% ( 7) 00:10:16.804 37089.369 - 37318.316: 99.8680% ( 13) 00:10:16.804 37318.316 - 37547.263: 99.9890% ( 11) 00:10:16.804 37547.263 - 37776.210: 100.0000% ( 1) 00:10:16.804 00:10:16.804 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:16.804 ============================================================================== 00:10:16.804 Range in us Cumulative IO count 00:10:16.804 9157.869 - 9215.106: 0.0110% ( 1) 00:10:16.804 9329.579 - 9386.816: 0.0880% ( 7) 00:10:16.804 9386.816 - 9444.052: 0.1871% ( 9) 00:10:16.804 9444.052 - 9501.289: 0.3081% ( 11) 00:10:16.804 9501.289 - 9558.526: 0.4181% ( 10) 00:10:16.804 9558.526 - 9615.762: 0.5282% ( 10) 00:10:16.804 9615.762 - 9672.999: 0.6602% ( 12) 00:10:16.804 9672.999 - 9730.236: 0.9023% ( 22) 00:10:16.804 9730.236 - 9787.472: 1.2764% ( 34) 00:10:16.804 9787.472 - 9844.709: 1.7826% ( 46) 00:10:16.804 9844.709 - 9901.946: 2.4648% ( 62) 00:10:16.804 9901.946 - 9959.183: 3.1470% ( 62) 00:10:16.804 9959.183 - 10016.419: 3.8512% ( 64) 00:10:16.804 10016.419 - 10073.656: 4.7535% ( 82) 00:10:16.804 10073.656 - 10130.893: 5.6118% ( 78) 00:10:16.804 10130.893 - 10188.129: 6.5031% ( 81) 00:10:16.804 10188.129 - 10245.366: 7.3063% ( 73) 00:10:16.804 10245.366 - 10302.603: 7.9115% ( 55) 00:10:16.804 10302.603 - 10359.839: 8.6158% ( 64) 00:10:16.804 10359.839 - 10417.076: 9.2760% ( 60) 00:10:16.804 10417.076 - 10474.313: 9.9362% ( 60) 00:10:16.804 10474.313 - 10531.549: 10.6294% ( 63) 00:10:16.804 10531.549 - 10588.786: 11.4327% ( 73) 00:10:16.804 10588.786 - 10646.023: 12.3900% ( 87) 00:10:16.804 10646.023 - 10703.259: 13.2923% ( 82) 00:10:16.804 10703.259 - 10760.496: 14.1835% ( 81) 00:10:16.804 10760.496 - 10817.733: 15.0638% ( 80) 00:10:16.804 10817.733 - 10874.969: 16.1862% ( 102) 00:10:16.804 10874.969 - 10932.206: 17.2315% ( 95) 00:10:16.804 10932.206 - 10989.443: 18.0678% ( 76) 00:10:16.804 10989.443 - 11046.679: 19.1571% ( 99) 00:10:16.804 11046.679 - 11103.916: 20.0374% ( 80) 00:10:16.804 11103.916 - 11161.153: 21.1598% ( 102) 00:10:16.804 11161.153 - 11218.390: 22.2381% ( 98) 00:10:16.804 11218.390 - 11275.626: 23.0964% ( 78) 00:10:16.804 11275.626 - 11332.863: 23.7896% ( 63) 00:10:16.804 11332.863 - 11390.100: 24.2188% ( 39) 00:10:16.804 11390.100 - 11447.336: 24.5819% ( 33) 00:10:16.804 11447.336 - 11504.573: 24.8019% ( 20) 00:10:16.804 11504.573 - 11561.810: 24.9670% ( 15) 00:10:16.804 11561.810 - 11619.046: 25.1540% ( 17) 00:10:16.804 11619.046 - 11676.283: 25.3301% ( 16) 00:10:16.804 11676.283 - 11733.520: 25.5062% ( 16) 00:10:16.804 11733.520 - 11790.756: 25.6932% ( 17) 00:10:16.804 11790.756 - 11847.993: 26.0013% ( 28) 00:10:16.804 11847.993 - 11905.230: 26.3644% ( 33) 00:10:16.804 11905.230 - 11962.466: 26.8926% ( 48) 00:10:16.804 11962.466 
- 12019.703: 27.6078% ( 65) 00:10:16.804 12019.703 - 12076.940: 28.2680% ( 60) 00:10:16.804 12076.940 - 12134.176: 28.8402% ( 52) 00:10:16.804 12134.176 - 12191.413: 29.5445% ( 64) 00:10:16.804 12191.413 - 12248.650: 30.3477% ( 73) 00:10:16.804 12248.650 - 12305.886: 30.9749% ( 57) 00:10:16.804 12305.886 - 12363.123: 31.4371% ( 42) 00:10:16.804 12363.123 - 12420.360: 31.8442% ( 37) 00:10:16.804 12420.360 - 12477.597: 32.1523% ( 28) 00:10:16.804 12477.597 - 12534.833: 32.4274% ( 25) 00:10:16.804 12534.833 - 12592.070: 32.7245% ( 27) 00:10:16.804 12592.070 - 12649.307: 33.0546% ( 30) 00:10:16.804 12649.307 - 12706.543: 33.5827% ( 48) 00:10:16.804 12706.543 - 12763.780: 34.1989% ( 56) 00:10:16.804 12763.780 - 12821.017: 35.0572% ( 78) 00:10:16.804 12821.017 - 12878.253: 35.9815% ( 84) 00:10:16.804 12878.253 - 12935.490: 36.8068% ( 75) 00:10:16.804 12935.490 - 12992.727: 37.5880% ( 71) 00:10:16.804 12992.727 - 13049.963: 38.5783% ( 90) 00:10:16.804 13049.963 - 13107.200: 39.3926% ( 74) 00:10:16.804 13107.200 - 13164.437: 40.2949% ( 82) 00:10:16.804 13164.437 - 13221.673: 41.0541% ( 69) 00:10:16.804 13221.673 - 13278.910: 41.7364% ( 62) 00:10:16.804 13278.910 - 13336.147: 42.4186% ( 62) 00:10:16.804 13336.147 - 13393.383: 43.1008% ( 62) 00:10:16.804 13393.383 - 13450.620: 43.6180% ( 47) 00:10:16.804 13450.620 - 13507.857: 44.0691% ( 41) 00:10:16.804 13507.857 - 13565.093: 44.3992% ( 30) 00:10:16.804 13565.093 - 13622.330: 44.8393% ( 40) 00:10:16.804 13622.330 - 13679.567: 45.3235% ( 44) 00:10:16.804 13679.567 - 13736.803: 45.7306% ( 37) 00:10:16.804 13736.803 - 13794.040: 46.2698% ( 49) 00:10:16.804 13794.040 - 13851.277: 46.6549% ( 35) 00:10:16.804 13851.277 - 13908.514: 47.0290% ( 34) 00:10:16.804 13908.514 - 13965.750: 47.4582% ( 39) 00:10:16.804 13965.750 - 14022.987: 47.9093% ( 41) 00:10:16.804 14022.987 - 14080.224: 48.3715% ( 42) 00:10:16.804 14080.224 - 14137.460: 48.9437% ( 52) 00:10:16.804 14137.460 - 14194.697: 49.6809% ( 67) 00:10:16.804 14194.697 - 14251.934: 50.3851% ( 64) 00:10:16.804 14251.934 - 14309.170: 51.3314% ( 86) 00:10:16.804 14309.170 - 14366.407: 52.2337% ( 82) 00:10:16.804 14366.407 - 14423.644: 53.1360% ( 82) 00:10:16.804 14423.644 - 14480.880: 54.3904% ( 114) 00:10:16.804 14480.880 - 14538.117: 55.8209% ( 130) 00:10:16.804 14538.117 - 14595.354: 57.2183% ( 127) 00:10:16.804 14595.354 - 14652.590: 58.6048% ( 126) 00:10:16.804 14652.590 - 14767.064: 61.3886% ( 253) 00:10:16.804 14767.064 - 14881.537: 64.4586% ( 279) 00:10:16.804 14881.537 - 14996.010: 66.8244% ( 215) 00:10:16.804 14996.010 - 15110.484: 68.9371% ( 192) 00:10:16.804 15110.484 - 15224.957: 70.4555% ( 138) 00:10:16.804 15224.957 - 15339.431: 71.5559% ( 100) 00:10:16.804 15339.431 - 15453.904: 72.5242% ( 88) 00:10:16.804 15453.904 - 15568.377: 73.4375% ( 83) 00:10:16.804 15568.377 - 15682.851: 74.5158% ( 98) 00:10:16.804 15682.851 - 15797.324: 75.6602% ( 104) 00:10:16.804 15797.324 - 15911.797: 77.0907% ( 130) 00:10:16.804 15911.797 - 16026.271: 78.8402% ( 159) 00:10:16.804 16026.271 - 16140.744: 80.3147% ( 134) 00:10:16.804 16140.744 - 16255.217: 81.3270% ( 92) 00:10:16.804 16255.217 - 16369.691: 82.1413% ( 74) 00:10:16.804 16369.691 - 16484.164: 82.7245% ( 53) 00:10:16.804 16484.164 - 16598.638: 83.3187% ( 54) 00:10:16.804 16598.638 - 16713.111: 83.9239% ( 55) 00:10:16.804 16713.111 - 16827.584: 84.5621% ( 58) 00:10:16.804 16827.584 - 16942.058: 85.1673% ( 55) 00:10:16.804 16942.058 - 17056.531: 85.7394% ( 52) 00:10:16.804 17056.531 - 17171.004: 86.3446% ( 55) 00:10:16.804 17171.004 - 17285.478: 86.9388% 
( 54) 00:10:16.804 17285.478 - 17399.951: 87.3460% ( 37) 00:10:16.804 17399.951 - 17514.424: 87.7421% ( 36) 00:10:16.804 17514.424 - 17628.898: 88.2812% ( 49) 00:10:16.804 17628.898 - 17743.371: 88.6334% ( 32) 00:10:16.804 17743.371 - 17857.845: 88.9855% ( 32) 00:10:16.804 17857.845 - 17972.318: 89.3046% ( 29) 00:10:16.804 17972.318 - 18086.791: 89.6347% ( 30) 00:10:16.804 18086.791 - 18201.265: 90.1959% ( 51) 00:10:16.804 18201.265 - 18315.738: 90.8891% ( 63) 00:10:16.804 18315.738 - 18430.211: 91.5823% ( 63) 00:10:16.804 18430.211 - 18544.685: 92.3856% ( 73) 00:10:16.804 18544.685 - 18659.158: 93.1118% ( 66) 00:10:16.804 18659.158 - 18773.631: 93.8270% ( 65) 00:10:16.804 18773.631 - 18888.105: 94.4762% ( 59) 00:10:16.804 18888.105 - 19002.578: 94.8724% ( 36) 00:10:16.804 19002.578 - 19117.052: 95.3895% ( 47) 00:10:16.804 19117.052 - 19231.525: 95.9727% ( 53) 00:10:16.804 19231.525 - 19345.998: 96.5119% ( 49) 00:10:16.804 19345.998 - 19460.472: 96.6989% ( 17) 00:10:16.804 19460.472 - 19574.945: 96.8200% ( 11) 00:10:16.804 19574.945 - 19689.418: 96.9190% ( 9) 00:10:16.804 19689.418 - 19803.892: 97.0180% ( 9) 00:10:16.804 19803.892 - 19918.365: 97.0841% ( 6) 00:10:16.804 19918.365 - 20032.838: 97.1501% ( 6) 00:10:16.804 20032.838 - 20147.312: 97.1831% ( 3) 00:10:16.804 20147.312 - 20261.785: 97.1941% ( 1) 00:10:16.804 20376.259 - 20490.732: 97.2711% ( 7) 00:10:16.804 20490.732 - 20605.205: 97.3371% ( 6) 00:10:16.804 20605.205 - 20719.679: 97.3922% ( 5) 00:10:16.804 20719.679 - 20834.152: 97.4582% ( 6) 00:10:16.804 20834.152 - 20948.625: 97.5242% ( 6) 00:10:16.804 20948.625 - 21063.099: 97.6452% ( 11) 00:10:16.804 21063.099 - 21177.572: 97.7443% ( 9) 00:10:16.804 21177.572 - 21292.045: 97.8763% ( 12) 00:10:16.804 21292.045 - 21406.519: 97.9974% ( 11) 00:10:16.804 21406.519 - 21520.992: 98.0964% ( 9) 00:10:16.804 21520.992 - 21635.466: 98.2504% ( 14) 00:10:16.804 21635.466 - 21749.939: 98.3385% ( 8) 00:10:16.804 21749.939 - 21864.412: 98.3825% ( 4) 00:10:16.804 21864.412 - 21978.886: 98.4265% ( 4) 00:10:16.804 21978.886 - 22093.359: 98.4485% ( 2) 00:10:16.804 22093.359 - 22207.832: 98.4815% ( 3) 00:10:16.804 22207.832 - 22322.306: 98.5255% ( 4) 00:10:16.805 22322.306 - 22436.779: 98.5585% ( 3) 00:10:16.805 22436.779 - 22551.252: 98.5915% ( 3) 00:10:16.805 27359.134 - 27473.607: 98.6686% ( 7) 00:10:16.805 27473.607 - 27588.080: 98.7456% ( 7) 00:10:16.805 27588.080 - 27702.554: 98.8226% ( 7) 00:10:16.805 27702.554 - 27817.027: 98.8996% ( 7) 00:10:16.805 27817.027 - 27931.500: 98.9657% ( 6) 00:10:16.805 27931.500 - 28045.974: 99.0427% ( 7) 00:10:16.805 28045.974 - 28160.447: 99.1197% ( 7) 00:10:16.805 28160.447 - 28274.921: 99.1857% ( 6) 00:10:16.805 28274.921 - 28389.394: 99.2738% ( 8) 00:10:16.805 28389.394 - 28503.867: 99.2958% ( 2) 00:10:16.805 35486.742 - 35715.689: 99.4168% ( 11) 00:10:16.805 35715.689 - 35944.636: 99.5489% ( 12) 00:10:16.805 35944.636 - 36173.583: 99.6919% ( 13) 00:10:16.805 36173.583 - 36402.529: 99.8129% ( 11) 00:10:16.805 36402.529 - 36631.476: 99.9560% ( 13) 00:10:16.805 36631.476 - 36860.423: 100.0000% ( 4) 00:10:16.805 00:10:16.805 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:16.805 ============================================================================== 00:10:16.805 Range in us Cumulative IO count 00:10:16.805 8871.686 - 8928.922: 0.0110% ( 1) 00:10:16.805 8928.922 - 8986.159: 0.0550% ( 4) 00:10:16.805 8986.159 - 9043.396: 0.1210% ( 6) 00:10:16.805 9043.396 - 9100.632: 0.1981% ( 7) 00:10:16.805 9100.632 - 9157.869: 0.2971% ( 9) 
00:10:16.805 9157.869 - 9215.106: 0.4511% ( 14) 00:10:16.805 9215.106 - 9272.342: 0.4952% ( 4) 00:10:16.805 9272.342 - 9329.579: 0.5502% ( 5) 00:10:16.805 9329.579 - 9386.816: 0.5722% ( 2) 00:10:16.805 9386.816 - 9444.052: 0.6272% ( 5) 00:10:16.805 9444.052 - 9501.289: 0.7482% ( 11) 00:10:16.805 9501.289 - 9558.526: 0.9133% ( 15) 00:10:16.805 9558.526 - 9615.762: 1.0893% ( 16) 00:10:16.805 9615.762 - 9672.999: 1.3094% ( 20) 00:10:16.805 9672.999 - 9730.236: 1.4965% ( 17) 00:10:16.805 9730.236 - 9787.472: 1.7936% ( 27) 00:10:16.805 9787.472 - 9844.709: 2.1897% ( 36) 00:10:16.805 9844.709 - 9901.946: 2.7069% ( 47) 00:10:16.805 9901.946 - 9959.183: 3.3781% ( 61) 00:10:16.805 9959.183 - 10016.419: 3.9723% ( 54) 00:10:16.805 10016.419 - 10073.656: 4.7425% ( 70) 00:10:16.805 10073.656 - 10130.893: 5.7658% ( 93) 00:10:16.805 10130.893 - 10188.129: 6.8442% ( 98) 00:10:16.805 10188.129 - 10245.366: 7.9555% ( 101) 00:10:16.805 10245.366 - 10302.603: 8.6818% ( 66) 00:10:16.805 10302.603 - 10359.839: 9.4520% ( 70) 00:10:16.805 10359.839 - 10417.076: 10.4093% ( 87) 00:10:16.805 10417.076 - 10474.313: 10.8935% ( 44) 00:10:16.805 10474.313 - 10531.549: 11.3886% ( 45) 00:10:16.805 10531.549 - 10588.786: 11.7848% ( 36) 00:10:16.805 10588.786 - 10646.023: 12.5770% ( 72) 00:10:16.805 10646.023 - 10703.259: 13.5563% ( 89) 00:10:16.805 10703.259 - 10760.496: 14.4366% ( 80) 00:10:16.805 10760.496 - 10817.733: 15.8121% ( 125) 00:10:16.805 10817.733 - 10874.969: 17.1325% ( 120) 00:10:16.805 10874.969 - 10932.206: 18.0128% ( 80) 00:10:16.805 10932.206 - 10989.443: 18.9040% ( 81) 00:10:16.805 10989.443 - 11046.679: 19.9714% ( 97) 00:10:16.805 11046.679 - 11103.916: 20.6866% ( 65) 00:10:16.805 11103.916 - 11161.153: 21.1818% ( 45) 00:10:16.805 11161.153 - 11218.390: 21.8530% ( 61) 00:10:16.805 11218.390 - 11275.626: 22.4802% ( 57) 00:10:16.805 11275.626 - 11332.863: 22.8653% ( 35) 00:10:16.805 11332.863 - 11390.100: 23.2835% ( 38) 00:10:16.805 11390.100 - 11447.336: 23.7126% ( 39) 00:10:16.805 11447.336 - 11504.573: 23.9987% ( 26) 00:10:16.805 11504.573 - 11561.810: 24.3178% ( 29) 00:10:16.805 11561.810 - 11619.046: 24.5489% ( 21) 00:10:16.805 11619.046 - 11676.283: 24.7799% ( 21) 00:10:16.805 11676.283 - 11733.520: 25.0880% ( 28) 00:10:16.805 11733.520 - 11790.756: 25.4842% ( 36) 00:10:16.805 11790.756 - 11847.993: 25.8693% ( 35) 00:10:16.805 11847.993 - 11905.230: 26.4855% ( 56) 00:10:16.805 11905.230 - 11962.466: 27.1677% ( 62) 00:10:16.805 11962.466 - 12019.703: 27.8059% ( 58) 00:10:16.805 12019.703 - 12076.940: 28.5431% ( 67) 00:10:16.805 12076.940 - 12134.176: 29.1153% ( 52) 00:10:16.805 12134.176 - 12191.413: 29.5224% ( 37) 00:10:16.805 12191.413 - 12248.650: 29.9076% ( 35) 00:10:16.805 12248.650 - 12305.886: 30.1717% ( 24) 00:10:16.805 12305.886 - 12363.123: 30.4357% ( 24) 00:10:16.805 12363.123 - 12420.360: 30.8429% ( 37) 00:10:16.805 12420.360 - 12477.597: 31.0960% ( 23) 00:10:16.805 12477.597 - 12534.833: 31.3820% ( 26) 00:10:16.805 12534.833 - 12592.070: 31.7782% ( 36) 00:10:16.805 12592.070 - 12649.307: 32.3944% ( 56) 00:10:16.805 12649.307 - 12706.543: 33.0216% ( 57) 00:10:16.805 12706.543 - 12763.780: 33.8578% ( 76) 00:10:16.805 12763.780 - 12821.017: 34.6831% ( 75) 00:10:16.805 12821.017 - 12878.253: 35.7174% ( 94) 00:10:16.805 12878.253 - 12935.490: 36.6967% ( 89) 00:10:16.805 12935.490 - 12992.727: 37.4670% ( 70) 00:10:16.805 12992.727 - 13049.963: 38.1932% ( 66) 00:10:16.805 13049.963 - 13107.200: 38.7544% ( 51) 00:10:16.805 13107.200 - 13164.437: 39.4256% ( 61) 00:10:16.805 13164.437 - 13221.673: 
40.0308% ( 55) 00:10:16.805 13221.673 - 13278.910: 40.6030% ( 52) 00:10:16.805 13278.910 - 13336.147: 41.1422% ( 49) 00:10:16.805 13336.147 - 13393.383: 41.7254% ( 53) 00:10:16.805 13393.383 - 13450.620: 42.3526% ( 57) 00:10:16.805 13450.620 - 13507.857: 42.9137% ( 51) 00:10:16.805 13507.857 - 13565.093: 43.6180% ( 64) 00:10:16.805 13565.093 - 13622.330: 44.2562% ( 58) 00:10:16.805 13622.330 - 13679.567: 44.7623% ( 46) 00:10:16.805 13679.567 - 13736.803: 45.2025% ( 40) 00:10:16.805 13736.803 - 13794.040: 45.6096% ( 37) 00:10:16.805 13794.040 - 13851.277: 46.0057% ( 36) 00:10:16.805 13851.277 - 13908.514: 46.4789% ( 43) 00:10:16.805 13908.514 - 13965.750: 47.0290% ( 50) 00:10:16.805 13965.750 - 14022.987: 47.4582% ( 39) 00:10:16.805 14022.987 - 14080.224: 48.1074% ( 59) 00:10:16.805 14080.224 - 14137.460: 49.0977% ( 90) 00:10:16.805 14137.460 - 14194.697: 50.3631% ( 115) 00:10:16.805 14194.697 - 14251.934: 51.6285% ( 115) 00:10:16.805 14251.934 - 14309.170: 52.8279% ( 109) 00:10:16.805 14309.170 - 14366.407: 54.1263% ( 118) 00:10:16.805 14366.407 - 14423.644: 55.7218% ( 145) 00:10:16.805 14423.644 - 14480.880: 56.9212% ( 109) 00:10:16.805 14480.880 - 14538.117: 58.5167% ( 145) 00:10:16.805 14538.117 - 14595.354: 59.9032% ( 126) 00:10:16.805 14595.354 - 14652.590: 61.0475% ( 104) 00:10:16.805 14652.590 - 14767.064: 63.5563% ( 228) 00:10:16.805 14767.064 - 14881.537: 65.3829% ( 166) 00:10:16.805 14881.537 - 14996.010: 67.0885% ( 155) 00:10:16.805 14996.010 - 15110.484: 68.5960% ( 137) 00:10:16.805 15110.484 - 15224.957: 69.8614% ( 115) 00:10:16.805 15224.957 - 15339.431: 71.3578% ( 136) 00:10:16.805 15339.431 - 15453.904: 72.8213% ( 133) 00:10:16.805 15453.904 - 15568.377: 74.0317% ( 110) 00:10:16.805 15568.377 - 15682.851: 75.0440% ( 92) 00:10:16.805 15682.851 - 15797.324: 76.2874% ( 113) 00:10:16.805 15797.324 - 15911.797: 77.3988% ( 101) 00:10:16.805 15911.797 - 16026.271: 78.4991% ( 100) 00:10:16.805 16026.271 - 16140.744: 79.7205% ( 111) 00:10:16.805 16140.744 - 16255.217: 80.6118% ( 81) 00:10:16.805 16255.217 - 16369.691: 81.2940% ( 62) 00:10:16.805 16369.691 - 16484.164: 82.2623% ( 88) 00:10:16.805 16484.164 - 16598.638: 82.9115% ( 59) 00:10:16.805 16598.638 - 16713.111: 83.5277% ( 56) 00:10:16.805 16713.111 - 16827.584: 84.3090% ( 71) 00:10:16.805 16827.584 - 16942.058: 84.9142% ( 55) 00:10:16.805 16942.058 - 17056.531: 85.3543% ( 40) 00:10:16.805 17056.531 - 17171.004: 85.8935% ( 49) 00:10:16.805 17171.004 - 17285.478: 86.6417% ( 68) 00:10:16.805 17285.478 - 17399.951: 87.1369% ( 45) 00:10:16.805 17399.951 - 17514.424: 87.8631% ( 66) 00:10:16.805 17514.424 - 17628.898: 88.8094% ( 86) 00:10:16.805 17628.898 - 17743.371: 89.4696% ( 60) 00:10:16.805 17743.371 - 17857.845: 89.9208% ( 41) 00:10:16.805 17857.845 - 17972.318: 90.3939% ( 43) 00:10:16.805 17972.318 - 18086.791: 91.4723% ( 98) 00:10:16.805 18086.791 - 18201.265: 92.1765% ( 64) 00:10:16.805 18201.265 - 18315.738: 92.7817% ( 55) 00:10:16.805 18315.738 - 18430.211: 93.4639% ( 62) 00:10:16.805 18430.211 - 18544.685: 94.0471% ( 53) 00:10:16.805 18544.685 - 18659.158: 94.4982% ( 41) 00:10:16.805 18659.158 - 18773.631: 94.9604% ( 42) 00:10:16.805 18773.631 - 18888.105: 95.4555% ( 45) 00:10:16.805 18888.105 - 19002.578: 95.7526% ( 27) 00:10:16.805 19002.578 - 19117.052: 95.9617% ( 19) 00:10:16.805 19117.052 - 19231.525: 96.1708% ( 19) 00:10:16.805 19231.525 - 19345.998: 96.3468% ( 16) 00:10:16.805 19345.998 - 19460.472: 96.4679% ( 11) 00:10:16.805 19460.472 - 19574.945: 96.5889% ( 11) 00:10:16.805 19574.945 - 19689.418: 96.7650% ( 16) 
00:10:16.805 19689.418 - 19803.892: 96.9850% ( 20) 00:10:16.805 19803.892 - 19918.365: 97.0731% ( 8) 00:10:16.805 19918.365 - 20032.838: 97.1171% ( 4) 00:10:16.805 20032.838 - 20147.312: 97.1611% ( 4) 00:10:16.805 20147.312 - 20261.785: 97.2161% ( 5) 00:10:16.805 20261.785 - 20376.259: 97.2711% ( 5) 00:10:16.805 20376.259 - 20490.732: 97.3151% ( 4) 00:10:16.805 20490.732 - 20605.205: 97.3702% ( 5) 00:10:16.805 20605.205 - 20719.679: 97.5352% ( 15) 00:10:16.805 20719.679 - 20834.152: 97.7223% ( 17) 00:10:16.805 20834.152 - 20948.625: 97.9093% ( 17) 00:10:16.805 20948.625 - 21063.099: 97.9864% ( 7) 00:10:16.805 21063.099 - 21177.572: 98.0194% ( 3) 00:10:16.805 21177.572 - 21292.045: 98.0634% ( 4) 00:10:16.805 21292.045 - 21406.519: 98.0964% ( 3) 00:10:16.805 21406.519 - 21520.992: 98.1404% ( 4) 00:10:16.805 21520.992 - 21635.466: 98.1954% ( 5) 00:10:16.805 21635.466 - 21749.939: 98.2174% ( 2) 00:10:16.805 21749.939 - 21864.412: 98.2614% ( 4) 00:10:16.805 21864.412 - 21978.886: 98.3165% ( 5) 00:10:16.805 21978.886 - 22093.359: 98.4155% ( 9) 00:10:16.805 22093.359 - 22207.832: 98.5365% ( 11) 00:10:16.805 22207.832 - 22322.306: 98.5805% ( 4) 00:10:16.805 22322.306 - 22436.779: 98.5915% ( 1) 00:10:16.805 28618.341 - 28732.814: 98.6356% ( 4) 00:10:16.805 28732.814 - 28847.287: 98.7016% ( 6) 00:10:16.805 28847.287 - 28961.761: 98.7786% ( 7) 00:10:16.805 28961.761 - 29076.234: 98.8556% ( 7) 00:10:16.805 29076.234 - 29190.707: 98.9327% ( 7) 00:10:16.805 29190.707 - 29305.181: 99.0097% ( 7) 00:10:16.805 29305.181 - 29534.128: 99.1637% ( 14) 00:10:16.805 29534.128 - 29763.074: 99.2958% ( 12) 00:10:16.805 36631.476 - 36860.423: 99.4388% ( 13) 00:10:16.805 36860.423 - 37089.369: 99.5819% ( 13) 00:10:16.805 37089.369 - 37318.316: 99.7359% ( 14) 00:10:16.805 37318.316 - 37547.263: 99.8790% ( 13) 00:10:16.805 37547.263 - 37776.210: 100.0000% ( 11) 00:10:16.805 00:10:16.805 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:16.805 ============================================================================== 00:10:16.805 Range in us Cumulative IO count 00:10:16.805 8528.266 - 8585.502: 0.0110% ( 1) 00:10:16.805 8585.502 - 8642.739: 0.0440% ( 3) 00:10:16.805 8642.739 - 8699.976: 0.0880% ( 4) 00:10:16.805 8699.976 - 8757.212: 0.1871% ( 9) 00:10:16.805 8757.212 - 8814.449: 0.3741% ( 17) 00:10:16.806 8814.449 - 8871.686: 0.4952% ( 11) 00:10:16.806 8871.686 - 8928.922: 0.5392% ( 4) 00:10:16.806 8928.922 - 8986.159: 0.5612% ( 2) 00:10:16.806 8986.159 - 9043.396: 0.5942% ( 3) 00:10:16.806 9043.396 - 9100.632: 0.6162% ( 2) 00:10:16.806 9100.632 - 9157.869: 0.6382% ( 2) 00:10:16.806 9157.869 - 9215.106: 0.6822% ( 4) 00:10:16.806 9215.106 - 9272.342: 0.7042% ( 2) 00:10:16.806 9272.342 - 9329.579: 0.7262% ( 2) 00:10:16.806 9329.579 - 9386.816: 0.7372% ( 1) 00:10:16.806 9386.816 - 9444.052: 0.7482% ( 1) 00:10:16.806 9444.052 - 9501.289: 0.7923% ( 4) 00:10:16.806 9501.289 - 9558.526: 0.8693% ( 7) 00:10:16.806 9558.526 - 9615.762: 1.0453% ( 16) 00:10:16.806 9615.762 - 9672.999: 1.3424% ( 27) 00:10:16.806 9672.999 - 9730.236: 1.6285% ( 26) 00:10:16.806 9730.236 - 9787.472: 1.9586% ( 30) 00:10:16.806 9787.472 - 9844.709: 2.4318% ( 43) 00:10:16.806 9844.709 - 9901.946: 2.9269% ( 45) 00:10:16.806 9901.946 - 9959.183: 3.4441% ( 47) 00:10:16.806 9959.183 - 10016.419: 4.2254% ( 71) 00:10:16.806 10016.419 - 10073.656: 4.8636% ( 58) 00:10:16.806 10073.656 - 10130.893: 5.6668% ( 73) 00:10:16.806 10130.893 - 10188.129: 6.5361% ( 79) 00:10:16.806 10188.129 - 10245.366: 7.3834% ( 77) 00:10:16.806 10245.366 - 
10302.603: 8.2967% ( 83) 00:10:16.806 10302.603 - 10359.839: 9.2430% ( 86) 00:10:16.806 10359.839 - 10417.076: 10.0022% ( 69) 00:10:16.806 10417.076 - 10474.313: 11.0475% ( 95) 00:10:16.806 10474.313 - 10531.549: 12.0379% ( 90) 00:10:16.806 10531.549 - 10588.786: 13.0502% ( 92) 00:10:16.806 10588.786 - 10646.023: 13.9635% ( 83) 00:10:16.806 10646.023 - 10703.259: 14.8217% ( 78) 00:10:16.806 10703.259 - 10760.496: 15.6800% ( 78) 00:10:16.806 10760.496 - 10817.733: 16.5053% ( 75) 00:10:16.806 10817.733 - 10874.969: 17.2975% ( 72) 00:10:16.806 10874.969 - 10932.206: 18.2108% ( 83) 00:10:16.806 10932.206 - 10989.443: 18.9811% ( 70) 00:10:16.806 10989.443 - 11046.679: 19.6963% ( 65) 00:10:16.806 11046.679 - 11103.916: 20.3895% ( 63) 00:10:16.806 11103.916 - 11161.153: 20.7526% ( 33) 00:10:16.806 11161.153 - 11218.390: 21.1488% ( 36) 00:10:16.806 11218.390 - 11275.626: 21.6659% ( 47) 00:10:16.806 11275.626 - 11332.863: 22.0951% ( 39) 00:10:16.806 11332.863 - 11390.100: 22.5792% ( 44) 00:10:16.806 11390.100 - 11447.336: 23.0194% ( 40) 00:10:16.806 11447.336 - 11504.573: 23.4815% ( 42) 00:10:16.806 11504.573 - 11561.810: 23.8116% ( 30) 00:10:16.806 11561.810 - 11619.046: 24.1967% ( 35) 00:10:16.806 11619.046 - 11676.283: 24.7689% ( 52) 00:10:16.806 11676.283 - 11733.520: 25.5172% ( 68) 00:10:16.806 11733.520 - 11790.756: 26.1994% ( 62) 00:10:16.806 11790.756 - 11847.993: 26.8596% ( 60) 00:10:16.806 11847.993 - 11905.230: 27.3107% ( 41) 00:10:16.806 11905.230 - 11962.466: 27.7509% ( 40) 00:10:16.806 11962.466 - 12019.703: 28.1360% ( 35) 00:10:16.806 12019.703 - 12076.940: 28.5761% ( 40) 00:10:16.806 12076.940 - 12134.176: 28.9613% ( 35) 00:10:16.806 12134.176 - 12191.413: 29.3354% ( 34) 00:10:16.806 12191.413 - 12248.650: 29.6105% ( 25) 00:10:16.806 12248.650 - 12305.886: 29.9516% ( 31) 00:10:16.806 12305.886 - 12363.123: 30.3587% ( 37) 00:10:16.806 12363.123 - 12420.360: 30.7548% ( 36) 00:10:16.806 12420.360 - 12477.597: 31.1180% ( 33) 00:10:16.806 12477.597 - 12534.833: 31.6021% ( 44) 00:10:16.806 12534.833 - 12592.070: 32.2183% ( 56) 00:10:16.806 12592.070 - 12649.307: 32.9005% ( 62) 00:10:16.806 12649.307 - 12706.543: 33.7038% ( 73) 00:10:16.806 12706.543 - 12763.780: 34.5511% ( 77) 00:10:16.806 12763.780 - 12821.017: 35.6404% ( 99) 00:10:16.806 12821.017 - 12878.253: 36.5537% ( 83) 00:10:16.806 12878.253 - 12935.490: 37.6540% ( 100) 00:10:16.806 12935.490 - 12992.727: 38.5233% ( 79) 00:10:16.806 12992.727 - 13049.963: 39.5246% ( 91) 00:10:16.806 13049.963 - 13107.200: 40.2949% ( 70) 00:10:16.806 13107.200 - 13164.437: 41.0982% ( 73) 00:10:16.806 13164.437 - 13221.673: 41.7254% ( 57) 00:10:16.806 13221.673 - 13278.910: 42.3195% ( 54) 00:10:16.806 13278.910 - 13336.147: 42.8257% ( 46) 00:10:16.806 13336.147 - 13393.383: 43.3539% ( 48) 00:10:16.806 13393.383 - 13450.620: 43.8380% ( 44) 00:10:16.806 13450.620 - 13507.857: 44.2011% ( 33) 00:10:16.806 13507.857 - 13565.093: 44.4762% ( 25) 00:10:16.806 13565.093 - 13622.330: 44.6963% ( 20) 00:10:16.806 13622.330 - 13679.567: 44.9714% ( 25) 00:10:16.806 13679.567 - 13736.803: 45.3565% ( 35) 00:10:16.806 13736.803 - 13794.040: 45.9287% ( 52) 00:10:16.806 13794.040 - 13851.277: 46.5669% ( 58) 00:10:16.806 13851.277 - 13908.514: 47.5352% ( 88) 00:10:16.806 13908.514 - 13965.750: 48.3715% ( 76) 00:10:16.806 13965.750 - 14022.987: 49.2077% ( 76) 00:10:16.806 14022.987 - 14080.224: 50.3081% ( 100) 00:10:16.806 14080.224 - 14137.460: 51.5955% ( 117) 00:10:16.806 14137.460 - 14194.697: 53.0700% ( 134) 00:10:16.806 14194.697 - 14251.934: 54.4784% ( 128) 
00:10:16.806 14251.934 - 14309.170: 55.8979% ( 129) 00:10:16.806 14309.170 - 14366.407: 57.2403% ( 122) 00:10:16.806 14366.407 - 14423.644: 58.5277% ( 117) 00:10:16.806 14423.644 - 14480.880: 59.6061% ( 98) 00:10:16.806 14480.880 - 14538.117: 60.9265% ( 120) 00:10:16.806 14538.117 - 14595.354: 62.0158% ( 99) 00:10:16.806 14595.354 - 14652.590: 62.9291% ( 83) 00:10:16.806 14652.590 - 14767.064: 64.7337% ( 164) 00:10:16.806 14767.064 - 14881.537: 66.1092% ( 125) 00:10:16.806 14881.537 - 14996.010: 67.2095% ( 100) 00:10:16.806 14996.010 - 15110.484: 68.4199% ( 110) 00:10:16.806 15110.484 - 15224.957: 69.4872% ( 97) 00:10:16.806 15224.957 - 15339.431: 70.4445% ( 87) 00:10:16.806 15339.431 - 15453.904: 71.2808% ( 76) 00:10:16.806 15453.904 - 15568.377: 72.2931% ( 92) 00:10:16.806 15568.377 - 15682.851: 73.3605% ( 97) 00:10:16.806 15682.851 - 15797.324: 74.3838% ( 93) 00:10:16.806 15797.324 - 15911.797: 75.3301% ( 86) 00:10:16.806 15911.797 - 16026.271: 76.4195% ( 99) 00:10:16.806 16026.271 - 16140.744: 77.5418% ( 102) 00:10:16.806 16140.744 - 16255.217: 78.7632% ( 111) 00:10:16.806 16255.217 - 16369.691: 80.2157% ( 132) 00:10:16.806 16369.691 - 16484.164: 81.5141% ( 118) 00:10:16.806 16484.164 - 16598.638: 82.4054% ( 81) 00:10:16.806 16598.638 - 16713.111: 83.2857% ( 80) 00:10:16.806 16713.111 - 16827.584: 84.4520% ( 106) 00:10:16.806 16827.584 - 16942.058: 85.5304% ( 98) 00:10:16.806 16942.058 - 17056.531: 86.4107% ( 80) 00:10:16.806 17056.531 - 17171.004: 86.9608% ( 50) 00:10:16.806 17171.004 - 17285.478: 87.4890% ( 48) 00:10:16.806 17285.478 - 17399.951: 88.0062% ( 47) 00:10:16.806 17399.951 - 17514.424: 88.5123% ( 46) 00:10:16.806 17514.424 - 17628.898: 89.0845% ( 52) 00:10:16.806 17628.898 - 17743.371: 89.7557% ( 61) 00:10:16.806 17743.371 - 17857.845: 90.3939% ( 58) 00:10:16.806 17857.845 - 17972.318: 90.8451% ( 41) 00:10:16.806 17972.318 - 18086.791: 91.4613% ( 56) 00:10:16.806 18086.791 - 18201.265: 92.1655% ( 64) 00:10:16.806 18201.265 - 18315.738: 92.5396% ( 34) 00:10:16.806 18315.738 - 18430.211: 92.8917% ( 32) 00:10:16.806 18430.211 - 18544.685: 93.3979% ( 46) 00:10:16.806 18544.685 - 18659.158: 93.7170% ( 29) 00:10:16.806 18659.158 - 18773.631: 93.9591% ( 22) 00:10:16.806 18773.631 - 18888.105: 94.2011% ( 22) 00:10:16.806 18888.105 - 19002.578: 94.4212% ( 20) 00:10:16.806 19002.578 - 19117.052: 94.7953% ( 34) 00:10:16.806 19117.052 - 19231.525: 95.3455% ( 50) 00:10:16.806 19231.525 - 19345.998: 95.6646% ( 29) 00:10:16.806 19345.998 - 19460.472: 95.8517% ( 17) 00:10:16.806 19460.472 - 19574.945: 95.9727% ( 11) 00:10:16.806 19574.945 - 19689.418: 96.1158% ( 13) 00:10:16.806 19689.418 - 19803.892: 96.3138% ( 18) 00:10:16.806 19803.892 - 19918.365: 96.4459% ( 12) 00:10:16.806 19918.365 - 20032.838: 96.5669% ( 11) 00:10:16.806 20032.838 - 20147.312: 96.6989% ( 12) 00:10:16.806 20147.312 - 20261.785: 96.8200% ( 11) 00:10:16.806 20261.785 - 20376.259: 96.9630% ( 13) 00:10:16.806 20376.259 - 20490.732: 97.2381% ( 25) 00:10:16.806 20490.732 - 20605.205: 97.4692% ( 21) 00:10:16.806 20605.205 - 20719.679: 97.6452% ( 16) 00:10:16.806 20719.679 - 20834.152: 97.6783% ( 3) 00:10:16.806 20834.152 - 20948.625: 97.7003% ( 2) 00:10:16.806 20948.625 - 21063.099: 97.7333% ( 3) 00:10:16.806 21063.099 - 21177.572: 97.7663% ( 3) 00:10:16.806 21177.572 - 21292.045: 97.7993% ( 3) 00:10:16.806 21292.045 - 21406.519: 97.8213% ( 2) 00:10:16.806 21406.519 - 21520.992: 97.8543% ( 3) 00:10:16.806 21520.992 - 21635.466: 97.8763% ( 2) 00:10:16.806 21635.466 - 21749.939: 97.8873% ( 1) 00:10:16.806 21978.886 - 
22093.359: 97.8983% ( 1) 00:10:16.806 22093.359 - 22207.832: 97.9533% ( 5) 00:10:16.806 22207.832 - 22322.306: 98.0194% ( 6) 00:10:16.806 22322.306 - 22436.779: 98.0744% ( 5) 00:10:16.806 22436.779 - 22551.252: 98.2724% ( 18) 00:10:16.806 22551.252 - 22665.726: 98.3935% ( 11) 00:10:16.806 22665.726 - 22780.199: 98.5475% ( 14) 00:10:16.806 22780.199 - 22894.672: 98.5805% ( 3) 00:10:16.806 22894.672 - 23009.146: 98.5915% ( 1) 00:10:16.806 28274.921 - 28389.394: 98.6136% ( 2) 00:10:16.806 28389.394 - 28503.867: 98.6796% ( 6) 00:10:16.806 28503.867 - 28618.341: 98.7566% ( 7) 00:10:16.806 28618.341 - 28732.814: 98.8336% ( 7) 00:10:16.806 28732.814 - 28847.287: 98.9107% ( 7) 00:10:16.806 28847.287 - 28961.761: 98.9877% ( 7) 00:10:16.806 28961.761 - 29076.234: 99.0647% ( 7) 00:10:16.806 29076.234 - 29190.707: 99.1417% ( 7) 00:10:16.806 29190.707 - 29305.181: 99.2188% ( 7) 00:10:16.806 29305.181 - 29534.128: 99.2958% ( 7) 00:10:16.806 35944.636 - 36173.583: 99.3398% ( 4) 00:10:16.806 36173.583 - 36402.529: 99.4828% ( 13) 00:10:16.806 36402.529 - 36631.476: 99.6259% ( 13) 00:10:16.806 36631.476 - 36860.423: 99.7689% ( 13) 00:10:16.806 36860.423 - 37089.369: 99.9230% ( 14) 00:10:16.806 37089.369 - 37318.316: 100.0000% ( 7) 00:10:16.806 00:10:16.806 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:16.806 ============================================================================== 00:10:16.806 Range in us Cumulative IO count 00:10:16.806 7841.425 - 7898.662: 0.0110% ( 1) 00:10:16.806 7955.899 - 8013.135: 0.0220% ( 1) 00:10:16.806 8184.845 - 8242.082: 0.0440% ( 2) 00:10:16.806 8242.082 - 8299.319: 0.1320% ( 8) 00:10:16.806 8299.319 - 8356.555: 0.2751% ( 13) 00:10:16.806 8356.555 - 8413.792: 0.4071% ( 12) 00:10:16.806 8413.792 - 8471.029: 0.4952% ( 8) 00:10:16.806 8471.029 - 8528.266: 0.5282% ( 3) 00:10:16.806 8528.266 - 8585.502: 0.5502% ( 2) 00:10:16.806 8585.502 - 8642.739: 0.5722% ( 2) 00:10:16.806 8642.739 - 8699.976: 0.5942% ( 2) 00:10:16.806 8699.976 - 8757.212: 0.6272% ( 3) 00:10:16.806 8757.212 - 8814.449: 0.6492% ( 2) 00:10:16.806 8814.449 - 8871.686: 0.6712% ( 2) 00:10:16.807 8871.686 - 8928.922: 0.7042% ( 3) 00:10:16.807 9215.106 - 9272.342: 0.7152% ( 1) 00:10:16.807 9272.342 - 9329.579: 0.7372% ( 2) 00:10:16.807 9329.579 - 9386.816: 0.7923% ( 5) 00:10:16.807 9386.816 - 9444.052: 0.9023% ( 10) 00:10:16.807 9444.052 - 9501.289: 1.0453% ( 13) 00:10:16.807 9501.289 - 9558.526: 1.3864% ( 31) 00:10:16.807 9558.526 - 9615.762: 1.6615% ( 25) 00:10:16.807 9615.762 - 9672.999: 2.1127% ( 41) 00:10:16.807 9672.999 - 9730.236: 2.5088% ( 36) 00:10:16.807 9730.236 - 9787.472: 2.8279% ( 29) 00:10:16.807 9787.472 - 9844.709: 3.1800% ( 32) 00:10:16.807 9844.709 - 9901.946: 3.5211% ( 31) 00:10:16.807 9901.946 - 9959.183: 3.9173% ( 36) 00:10:16.807 9959.183 - 10016.419: 4.3574% ( 40) 00:10:16.807 10016.419 - 10073.656: 5.0066% ( 59) 00:10:16.807 10073.656 - 10130.893: 5.6448% ( 58) 00:10:16.807 10130.893 - 10188.129: 6.1730% ( 48) 00:10:16.807 10188.129 - 10245.366: 7.0092% ( 76) 00:10:16.807 10245.366 - 10302.603: 7.7685% ( 69) 00:10:16.807 10302.603 - 10359.839: 8.4067% ( 58) 00:10:16.807 10359.839 - 10417.076: 9.3310% ( 84) 00:10:16.807 10417.076 - 10474.313: 9.9692% ( 58) 00:10:16.807 10474.313 - 10531.549: 10.8385% ( 79) 00:10:16.807 10531.549 - 10588.786: 11.6857% ( 77) 00:10:16.807 10588.786 - 10646.023: 12.7201% ( 94) 00:10:16.807 10646.023 - 10703.259: 13.8534% ( 103) 00:10:16.807 10703.259 - 10760.496: 14.7887% ( 85) 00:10:16.807 10760.496 - 10817.733: 15.6690% ( 80) 00:10:16.807 
10817.733 - 10874.969: 16.4173% ( 68) 00:10:16.807 10874.969 - 10932.206: 17.0445% ( 57) 00:10:16.807 10932.206 - 10989.443: 17.7377% ( 63) 00:10:16.807 10989.443 - 11046.679: 18.2438% ( 46) 00:10:16.807 11046.679 - 11103.916: 18.7720% ( 48) 00:10:16.807 11103.916 - 11161.153: 19.2452% ( 43) 00:10:16.807 11161.153 - 11218.390: 19.7293% ( 44) 00:10:16.807 11218.390 - 11275.626: 20.2245% ( 45) 00:10:16.807 11275.626 - 11332.863: 20.8407% ( 56) 00:10:16.807 11332.863 - 11390.100: 21.3798% ( 49) 00:10:16.807 11390.100 - 11447.336: 22.0070% ( 57) 00:10:16.807 11447.336 - 11504.573: 22.7003% ( 63) 00:10:16.807 11504.573 - 11561.810: 23.4265% ( 66) 00:10:16.807 11561.810 - 11619.046: 23.9987% ( 52) 00:10:16.807 11619.046 - 11676.283: 24.5489% ( 50) 00:10:16.807 11676.283 - 11733.520: 24.9450% ( 36) 00:10:16.807 11733.520 - 11790.756: 25.3961% ( 41) 00:10:16.807 11790.756 - 11847.993: 25.8693% ( 43) 00:10:16.807 11847.993 - 11905.230: 26.2434% ( 34) 00:10:16.807 11905.230 - 11962.466: 26.6725% ( 39) 00:10:16.807 11962.466 - 12019.703: 27.0797% ( 37) 00:10:16.807 12019.703 - 12076.940: 27.4318% ( 32) 00:10:16.807 12076.940 - 12134.176: 27.9049% ( 43) 00:10:16.807 12134.176 - 12191.413: 28.3121% ( 37) 00:10:16.807 12191.413 - 12248.650: 28.8182% ( 46) 00:10:16.807 12248.650 - 12305.886: 29.3464% ( 48) 00:10:16.807 12305.886 - 12363.123: 30.0946% ( 68) 00:10:16.807 12363.123 - 12420.360: 30.7438% ( 59) 00:10:16.807 12420.360 - 12477.597: 31.3600% ( 56) 00:10:16.807 12477.597 - 12534.833: 32.0423% ( 62) 00:10:16.807 12534.833 - 12592.070: 32.7465% ( 64) 00:10:16.807 12592.070 - 12649.307: 33.4947% ( 68) 00:10:16.807 12649.307 - 12706.543: 34.2430% ( 68) 00:10:16.807 12706.543 - 12763.780: 34.8261% ( 53) 00:10:16.807 12763.780 - 12821.017: 35.4974% ( 61) 00:10:16.807 12821.017 - 12878.253: 36.1796% ( 62) 00:10:16.807 12878.253 - 12935.490: 36.9608% ( 71) 00:10:16.807 12935.490 - 12992.727: 37.6540% ( 63) 00:10:16.807 12992.727 - 13049.963: 38.4683% ( 74) 00:10:16.807 13049.963 - 13107.200: 39.1505% ( 62) 00:10:16.807 13107.200 - 13164.437: 39.9318% ( 71) 00:10:16.807 13164.437 - 13221.673: 40.5370% ( 55) 00:10:16.807 13221.673 - 13278.910: 41.0871% ( 50) 00:10:16.807 13278.910 - 13336.147: 41.7143% ( 57) 00:10:16.807 13336.147 - 13393.383: 42.2865% ( 52) 00:10:16.807 13393.383 - 13450.620: 42.7597% ( 43) 00:10:16.807 13450.620 - 13507.857: 43.1888% ( 39) 00:10:16.807 13507.857 - 13565.093: 43.7060% ( 47) 00:10:16.807 13565.093 - 13622.330: 44.3332% ( 57) 00:10:16.807 13622.330 - 13679.567: 44.7293% ( 36) 00:10:16.807 13679.567 - 13736.803: 45.1034% ( 34) 00:10:16.807 13736.803 - 13794.040: 45.6316% ( 48) 00:10:16.807 13794.040 - 13851.277: 46.4018% ( 70) 00:10:16.807 13851.277 - 13908.514: 47.2161% ( 74) 00:10:16.807 13908.514 - 13965.750: 47.9313% ( 65) 00:10:16.807 13965.750 - 14022.987: 48.9437% ( 92) 00:10:16.807 14022.987 - 14080.224: 50.2861% ( 122) 00:10:16.807 14080.224 - 14137.460: 51.8156% ( 139) 00:10:16.807 14137.460 - 14194.697: 53.1030% ( 117) 00:10:16.807 14194.697 - 14251.934: 54.3134% ( 110) 00:10:16.807 14251.934 - 14309.170: 55.4577% ( 104) 00:10:16.807 14309.170 - 14366.407: 56.9762% ( 138) 00:10:16.807 14366.407 - 14423.644: 57.9225% ( 86) 00:10:16.807 14423.644 - 14480.880: 58.7478% ( 75) 00:10:16.807 14480.880 - 14538.117: 59.6831% ( 85) 00:10:16.807 14538.117 - 14595.354: 60.6514% ( 88) 00:10:16.807 14595.354 - 14652.590: 61.5427% ( 81) 00:10:16.807 14652.590 - 14767.064: 63.6664% ( 193) 00:10:16.807 14767.064 - 14881.537: 65.1629% ( 136) 00:10:16.807 14881.537 - 14996.010: 
66.8024% ( 149) 00:10:16.807 14996.010 - 15110.484: 68.7390% ( 176) 00:10:16.807 15110.484 - 15224.957: 69.9934% ( 114) 00:10:16.807 15224.957 - 15339.431: 71.0387% ( 95) 00:10:16.807 15339.431 - 15453.904: 72.0841% ( 95) 00:10:16.807 15453.904 - 15568.377: 73.0304% ( 86) 00:10:16.807 15568.377 - 15682.851: 73.8226% ( 72) 00:10:16.807 15682.851 - 15797.324: 74.6919% ( 79) 00:10:16.807 15797.324 - 15911.797: 75.8803% ( 108) 00:10:16.807 15911.797 - 16026.271: 77.1017% ( 111) 00:10:16.807 16026.271 - 16140.744: 78.4991% ( 127) 00:10:16.807 16140.744 - 16255.217: 80.2157% ( 156) 00:10:16.807 16255.217 - 16369.691: 81.8992% ( 153) 00:10:16.807 16369.691 - 16484.164: 82.9996% ( 100) 00:10:16.807 16484.164 - 16598.638: 83.8578% ( 78) 00:10:16.807 16598.638 - 16713.111: 84.6171% ( 69) 00:10:16.807 16713.111 - 16827.584: 85.4974% ( 80) 00:10:16.807 16827.584 - 16942.058: 86.4107% ( 83) 00:10:16.807 16942.058 - 17056.531: 87.1039% ( 63) 00:10:16.807 17056.531 - 17171.004: 87.8081% ( 64) 00:10:16.807 17171.004 - 17285.478: 88.4463% ( 58) 00:10:16.807 17285.478 - 17399.951: 89.1615% ( 65) 00:10:16.807 17399.951 - 17514.424: 89.8107% ( 59) 00:10:16.807 17514.424 - 17628.898: 90.3609% ( 50) 00:10:16.807 17628.898 - 17743.371: 90.7460% ( 35) 00:10:16.807 17743.371 - 17857.845: 91.0871% ( 31) 00:10:16.807 17857.845 - 17972.318: 91.4062% ( 29) 00:10:16.807 17972.318 - 18086.791: 91.6923% ( 26) 00:10:16.807 18086.791 - 18201.265: 92.2535% ( 51) 00:10:16.807 18201.265 - 18315.738: 92.5616% ( 28) 00:10:16.807 18315.738 - 18430.211: 92.6937% ( 12) 00:10:16.807 18430.211 - 18544.685: 92.9467% ( 23) 00:10:16.807 18544.685 - 18659.158: 93.1558% ( 19) 00:10:16.807 18659.158 - 18773.631: 93.4749% ( 29) 00:10:16.807 18773.631 - 18888.105: 93.7610% ( 26) 00:10:16.807 18888.105 - 19002.578: 93.9921% ( 21) 00:10:16.807 19002.578 - 19117.052: 94.1901% ( 18) 00:10:16.807 19117.052 - 19231.525: 94.5202% ( 30) 00:10:16.807 19231.525 - 19345.998: 94.7623% ( 22) 00:10:16.807 19345.998 - 19460.472: 95.0044% ( 22) 00:10:16.807 19460.472 - 19574.945: 95.3345% ( 30) 00:10:16.807 19574.945 - 19689.418: 95.8627% ( 48) 00:10:16.807 19689.418 - 19803.892: 96.5229% ( 60) 00:10:16.807 19803.892 - 19918.365: 96.8090% ( 26) 00:10:16.807 19918.365 - 20032.838: 97.0841% ( 25) 00:10:16.807 20032.838 - 20147.312: 97.4142% ( 30) 00:10:16.807 20147.312 - 20261.785: 97.7443% ( 30) 00:10:16.807 20261.785 - 20376.259: 97.9533% ( 19) 00:10:16.807 20376.259 - 20490.732: 98.0964% ( 13) 00:10:16.807 20490.732 - 20605.205: 98.2064% ( 10) 00:10:16.807 20605.205 - 20719.679: 98.3055% ( 9) 00:10:16.807 20719.679 - 20834.152: 98.3715% ( 6) 00:10:16.807 20834.152 - 20948.625: 98.4265% ( 5) 00:10:16.807 20948.625 - 21063.099: 98.4705% ( 4) 00:10:16.807 21063.099 - 21177.572: 98.5255% ( 5) 00:10:16.807 21177.572 - 21292.045: 98.5585% ( 3) 00:10:16.807 21292.045 - 21406.519: 98.5915% ( 3) 00:10:16.807 28389.394 - 28503.867: 98.6026% ( 1) 00:10:16.807 28503.867 - 28618.341: 98.6796% ( 7) 00:10:16.807 28618.341 - 28732.814: 98.7566% ( 7) 00:10:16.807 28732.814 - 28847.287: 98.8446% ( 8) 00:10:16.807 28847.287 - 28961.761: 98.9217% ( 7) 00:10:16.807 28961.761 - 29076.234: 98.9877% ( 6) 00:10:16.807 29076.234 - 29190.707: 99.0647% ( 7) 00:10:16.807 29190.707 - 29305.181: 99.1417% ( 7) 00:10:16.807 29305.181 - 29534.128: 99.2958% ( 14) 00:10:16.807 36173.583 - 36402.529: 99.3948% ( 9) 00:10:16.807 36402.529 - 36631.476: 99.5379% ( 13) 00:10:16.807 36631.476 - 36860.423: 99.6809% ( 13) 00:10:16.807 36860.423 - 37089.369: 99.8349% ( 14) 00:10:16.807 37089.369 - 
37318.316: 99.9780% ( 13) 00:10:16.807 37318.316 - 37547.263: 100.0000% ( 2) 00:10:16.807 00:10:16.807 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:16.807 ============================================================================== 00:10:16.808 Range in us Cumulative IO count 00:10:16.808 7326.295 - 7383.532: 0.0109% ( 1) 00:10:16.808 7669.715 - 7726.952: 0.0656% ( 5) 00:10:16.808 7726.952 - 7784.189: 0.1093% ( 4) 00:10:16.808 7784.189 - 7841.425: 0.2076% ( 9) 00:10:16.808 7841.425 - 7898.662: 0.3606% ( 14) 00:10:16.808 7898.662 - 7955.899: 0.4698% ( 10) 00:10:16.808 7955.899 - 8013.135: 0.4917% ( 2) 00:10:16.808 8013.135 - 8070.372: 0.5245% ( 3) 00:10:16.808 8070.372 - 8127.609: 0.5463% ( 2) 00:10:16.808 8127.609 - 8184.845: 0.5791% ( 3) 00:10:16.808 8184.845 - 8242.082: 0.5900% ( 1) 00:10:16.808 8242.082 - 8299.319: 0.6228% ( 3) 00:10:16.808 8299.319 - 8356.555: 0.6447% ( 2) 00:10:16.808 8356.555 - 8413.792: 0.6665% ( 2) 00:10:16.808 8413.792 - 8471.029: 0.6884% ( 2) 00:10:16.808 8471.029 - 8528.266: 0.6993% ( 1) 00:10:16.808 9386.816 - 9444.052: 0.7212% ( 2) 00:10:16.808 9444.052 - 9501.289: 0.8086% ( 8) 00:10:16.808 9501.289 - 9558.526: 0.9069% ( 9) 00:10:16.808 9558.526 - 9615.762: 1.0271% ( 11) 00:10:16.808 9615.762 - 9672.999: 1.2784% ( 23) 00:10:16.808 9672.999 - 9730.236: 1.4205% ( 13) 00:10:16.808 9730.236 - 9787.472: 1.7264% ( 28) 00:10:16.808 9787.472 - 9844.709: 2.0651% ( 31) 00:10:16.808 9844.709 - 9901.946: 2.6661% ( 55) 00:10:16.808 9901.946 - 9959.183: 3.2889% ( 57) 00:10:16.808 9959.183 - 10016.419: 4.0865% ( 73) 00:10:16.808 10016.419 - 10073.656: 4.8077% ( 66) 00:10:16.808 10073.656 - 10130.893: 5.7583% ( 87) 00:10:16.808 10130.893 - 10188.129: 6.5778% ( 75) 00:10:16.808 10188.129 - 10245.366: 7.5393% ( 88) 00:10:16.808 10245.366 - 10302.603: 8.3588% ( 75) 00:10:16.808 10302.603 - 10359.839: 9.2330% ( 80) 00:10:16.808 10359.839 - 10417.076: 9.9323% ( 64) 00:10:16.808 10417.076 - 10474.313: 10.5114% ( 53) 00:10:16.808 10474.313 - 10531.549: 11.2107% ( 64) 00:10:16.808 10531.549 - 10588.786: 11.8007% ( 54) 00:10:16.808 10588.786 - 10646.023: 12.6858% ( 81) 00:10:16.808 10646.023 - 10703.259: 13.6691% ( 90) 00:10:16.808 10703.259 - 10760.496: 14.2045% ( 49) 00:10:16.808 10760.496 - 10817.733: 14.9476% ( 68) 00:10:16.808 10817.733 - 10874.969: 15.8763% ( 85) 00:10:16.808 10874.969 - 10932.206: 16.6958% ( 75) 00:10:16.808 10932.206 - 10989.443: 17.6136% ( 84) 00:10:16.808 10989.443 - 11046.679: 18.5752% ( 88) 00:10:16.808 11046.679 - 11103.916: 19.3510% ( 71) 00:10:16.808 11103.916 - 11161.153: 20.1267% ( 71) 00:10:16.808 11161.153 - 11218.390: 20.6949% ( 52) 00:10:16.808 11218.390 - 11275.626: 21.2522% ( 51) 00:10:16.808 11275.626 - 11332.863: 21.7548% ( 46) 00:10:16.808 11332.863 - 11390.100: 22.1591% ( 37) 00:10:16.808 11390.100 - 11447.336: 22.4104% ( 23) 00:10:16.808 11447.336 - 11504.573: 22.8256% ( 38) 00:10:16.808 11504.573 - 11561.810: 23.1862% ( 33) 00:10:16.808 11561.810 - 11619.046: 23.4156% ( 21) 00:10:16.808 11619.046 - 11676.283: 23.7325% ( 29) 00:10:16.808 11676.283 - 11733.520: 24.0931% ( 33) 00:10:16.808 11733.520 - 11790.756: 24.5848% ( 45) 00:10:16.808 11790.756 - 11847.993: 24.9781% ( 36) 00:10:16.808 11847.993 - 11905.230: 25.4261% ( 41) 00:10:16.808 11905.230 - 11962.466: 25.8851% ( 42) 00:10:16.808 11962.466 - 12019.703: 26.5079% ( 57) 00:10:16.808 12019.703 - 12076.940: 27.1635% ( 60) 00:10:16.808 12076.940 - 12134.176: 27.9939% ( 76) 00:10:16.808 12134.176 - 12191.413: 28.5184% ( 48) 00:10:16.808 12191.413 - 12248.650: 
29.0319% ( 47) 00:10:16.808 12248.650 - 12305.886: 29.5673% ( 49) 00:10:16.808 12305.886 - 12363.123: 30.5070% ( 86) 00:10:16.808 12363.123 - 12420.360: 31.1189% ( 56) 00:10:16.808 12420.360 - 12477.597: 32.0149% ( 82) 00:10:16.808 12477.597 - 12534.833: 32.6814% ( 61) 00:10:16.808 12534.833 - 12592.070: 33.5664% ( 81) 00:10:16.808 12592.070 - 12649.307: 34.3969% ( 76) 00:10:16.808 12649.307 - 12706.543: 35.1945% ( 73) 00:10:16.808 12706.543 - 12763.780: 35.9266% ( 67) 00:10:16.808 12763.780 - 12821.017: 36.5603% ( 58) 00:10:16.808 12821.017 - 12878.253: 37.2378% ( 62) 00:10:16.808 12878.253 - 12935.490: 37.9043% ( 61) 00:10:16.808 12935.490 - 12992.727: 38.5052% ( 55) 00:10:16.808 12992.727 - 13049.963: 38.9423% ( 40) 00:10:16.808 13049.963 - 13107.200: 39.4012% ( 42) 00:10:16.808 13107.200 - 13164.437: 39.9257% ( 48) 00:10:16.808 13164.437 - 13221.673: 40.4611% ( 49) 00:10:16.808 13221.673 - 13278.910: 41.0948% ( 58) 00:10:16.808 13278.910 - 13336.147: 41.6849% ( 54) 00:10:16.808 13336.147 - 13393.383: 42.3842% ( 64) 00:10:16.808 13393.383 - 13450.620: 43.1927% ( 74) 00:10:16.808 13450.620 - 13507.857: 43.9795% ( 72) 00:10:16.808 13507.857 - 13565.093: 44.6678% ( 63) 00:10:16.808 13565.093 - 13622.330: 45.3453% ( 62) 00:10:16.808 13622.330 - 13679.567: 46.0774% ( 67) 00:10:16.808 13679.567 - 13736.803: 46.7876% ( 65) 00:10:16.808 13736.803 - 13794.040: 47.5415% ( 69) 00:10:16.808 13794.040 - 13851.277: 48.1534% ( 56) 00:10:16.808 13851.277 - 13908.514: 48.8964% ( 68) 00:10:16.808 13908.514 - 13965.750: 49.6394% ( 68) 00:10:16.808 13965.750 - 14022.987: 50.3606% ( 66) 00:10:16.808 14022.987 - 14080.224: 51.2456% ( 81) 00:10:16.808 14080.224 - 14137.460: 51.9231% ( 62) 00:10:16.808 14137.460 - 14194.697: 52.4803% ( 51) 00:10:16.808 14194.697 - 14251.934: 53.1796% ( 64) 00:10:16.808 14251.934 - 14309.170: 53.8571% ( 62) 00:10:16.808 14309.170 - 14366.407: 54.5673% ( 65) 00:10:16.808 14366.407 - 14423.644: 55.5616% ( 91) 00:10:16.808 14423.644 - 14480.880: 56.6324% ( 98) 00:10:16.808 14480.880 - 14538.117: 57.5940% ( 88) 00:10:16.808 14538.117 - 14595.354: 58.6211% ( 94) 00:10:16.808 14595.354 - 14652.590: 60.0197% ( 128) 00:10:16.808 14652.590 - 14767.064: 62.4781% ( 225) 00:10:16.808 14767.064 - 14881.537: 64.4231% ( 178) 00:10:16.808 14881.537 - 14996.010: 65.9091% ( 136) 00:10:16.808 14996.010 - 15110.484: 67.3295% ( 130) 00:10:16.808 15110.484 - 15224.957: 68.7391% ( 129) 00:10:16.808 15224.957 - 15339.431: 69.9191% ( 108) 00:10:16.808 15339.431 - 15453.904: 71.1757% ( 115) 00:10:16.808 15453.904 - 15568.377: 72.3995% ( 112) 00:10:16.808 15568.377 - 15682.851: 74.1368% ( 159) 00:10:16.808 15682.851 - 15797.324: 75.7758% ( 150) 00:10:16.808 15797.324 - 15911.797: 77.2727% ( 137) 00:10:16.808 15911.797 - 16026.271: 78.7150% ( 132) 00:10:16.808 16026.271 - 16140.744: 80.0481% ( 122) 00:10:16.808 16140.744 - 16255.217: 81.0861% ( 95) 00:10:16.808 16255.217 - 16369.691: 82.1678% ( 99) 00:10:16.808 16369.691 - 16484.164: 83.3370% ( 107) 00:10:16.808 16484.164 - 16598.638: 84.2876% ( 87) 00:10:16.808 16598.638 - 16713.111: 85.1071% ( 75) 00:10:16.808 16713.111 - 16827.584: 85.6971% ( 54) 00:10:16.808 16827.584 - 16942.058: 86.3418% ( 59) 00:10:16.808 16942.058 - 17056.531: 86.9427% ( 55) 00:10:16.808 17056.531 - 17171.004: 87.6202% ( 62) 00:10:16.808 17171.004 - 17285.478: 88.1228% ( 46) 00:10:16.808 17285.478 - 17399.951: 88.5162% ( 36) 00:10:16.808 17399.951 - 17514.424: 88.9642% ( 41) 00:10:16.808 17514.424 - 17628.898: 89.4449% ( 44) 00:10:16.808 17628.898 - 17743.371: 90.1661% ( 66) 
00:10:16.808 17743.371 - 17857.845: 90.8982% ( 67) 00:10:16.808 17857.845 - 17972.318: 91.4882% ( 54) 00:10:16.808 17972.318 - 18086.791: 91.7832% ( 27) 00:10:16.808 18086.791 - 18201.265: 92.0673% ( 26) 00:10:16.808 18201.265 - 18315.738: 92.3842% ( 29) 00:10:16.808 18315.738 - 18430.211: 92.6136% ( 21) 00:10:16.808 18430.211 - 18544.685: 92.8540% ( 22) 00:10:16.808 18544.685 - 18659.158: 93.5642% ( 65) 00:10:16.808 18659.158 - 18773.631: 93.8046% ( 22) 00:10:16.808 18773.631 - 18888.105: 94.0232% ( 20) 00:10:16.808 18888.105 - 19002.578: 94.2417% ( 20) 00:10:16.808 19002.578 - 19117.052: 94.4165% ( 16) 00:10:16.808 19117.052 - 19231.525: 94.5586% ( 13) 00:10:16.808 19231.525 - 19345.998: 94.9628% ( 37) 00:10:16.808 19345.998 - 19460.472: 95.2797% ( 29) 00:10:16.808 19460.472 - 19574.945: 95.6622% ( 35) 00:10:16.808 19574.945 - 19689.418: 96.0992% ( 40) 00:10:16.808 19689.418 - 19803.892: 96.5472% ( 41) 00:10:16.808 19803.892 - 19918.365: 96.8531% ( 28) 00:10:16.808 19918.365 - 20032.838: 97.0717% ( 20) 00:10:16.808 20032.838 - 20147.312: 97.2574% ( 17) 00:10:16.808 20147.312 - 20261.785: 97.4213% ( 15) 00:10:16.808 20261.785 - 20376.259: 97.6617% ( 22) 00:10:16.808 20376.259 - 20490.732: 97.9349% ( 25) 00:10:16.808 20490.732 - 20605.205: 98.0114% ( 7) 00:10:16.808 20605.205 - 20719.679: 98.1206% ( 10) 00:10:16.808 20719.679 - 20834.152: 98.2299% ( 10) 00:10:16.808 20834.152 - 20948.625: 98.3501% ( 11) 00:10:16.808 20948.625 - 21063.099: 98.4703% ( 11) 00:10:16.808 21063.099 - 21177.572: 98.6014% ( 12) 00:10:16.808 21177.572 - 21292.045: 98.7325% ( 12) 00:10:16.808 21292.045 - 21406.519: 98.8746% ( 13) 00:10:16.808 21406.519 - 21520.992: 99.0057% ( 12) 00:10:16.808 21520.992 - 21635.466: 99.1149% ( 10) 00:10:16.808 21635.466 - 21749.939: 99.1805% ( 6) 00:10:16.808 21749.939 - 21864.412: 99.2242% ( 4) 00:10:16.808 21864.412 - 21978.886: 99.2679% ( 4) 00:10:16.808 21978.886 - 22093.359: 99.3007% ( 3) 00:10:16.808 28847.287 - 28961.761: 99.3226% ( 2) 00:10:16.808 28961.761 - 29076.234: 99.3881% ( 6) 00:10:16.808 29076.234 - 29190.707: 99.4646% ( 7) 00:10:16.808 29190.707 - 29305.181: 99.5411% ( 7) 00:10:16.808 29305.181 - 29534.128: 99.6831% ( 13) 00:10:16.808 29534.128 - 29763.074: 99.8361% ( 14) 00:10:16.808 29763.074 - 29992.021: 99.9781% ( 13) 00:10:16.808 29992.021 - 30220.968: 100.0000% ( 2) 00:10:16.808 00:10:16.808 18:28:16 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:16.808 00:10:16.808 real 0m2.492s 00:10:16.808 user 0m2.182s 00:10:16.808 sys 0m0.207s 00:10:16.808 18:28:16 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.808 18:28:16 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:16.808 ************************************ 00:10:16.808 END TEST nvme_perf 00:10:16.808 ************************************ 00:10:16.808 18:28:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:16.808 18:28:16 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:16.808 18:28:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.808 18:28:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.808 ************************************ 00:10:16.808 START TEST nvme_hello_world 00:10:16.808 ************************************ 00:10:16.808 18:28:16 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:16.808 Initializing NVMe Controllers 00:10:16.808 Attached to 
0000:00:10.0 00:10:16.808 Namespace ID: 1 size: 6GB 00:10:16.808 Attached to 0000:00:11.0 00:10:16.808 Namespace ID: 1 size: 5GB 00:10:16.808 Attached to 0000:00:13.0 00:10:16.808 Namespace ID: 1 size: 1GB 00:10:16.808 Attached to 0000:00:12.0 00:10:16.808 Namespace ID: 1 size: 4GB 00:10:16.808 Namespace ID: 2 size: 4GB 00:10:16.808 Namespace ID: 3 size: 4GB 00:10:16.808 Initialization complete. 00:10:16.808 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 INFO: using host memory buffer for IO 00:10:16.809 Hello world! 00:10:16.809 00:10:16.809 real 0m0.254s 00:10:16.809 user 0m0.077s 00:10:16.809 sys 0m0.127s 00:10:16.809 18:28:16 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.809 ************************************ 00:10:16.809 END TEST nvme_hello_world 00:10:16.809 ************************************ 00:10:16.809 18:28:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:16.809 18:28:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:16.809 18:28:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:16.809 18:28:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.068 18:28:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.068 ************************************ 00:10:17.068 START TEST nvme_sgl 00:10:17.068 ************************************ 00:10:17.068 18:28:16 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:17.068 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:17.068 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:17.068 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:17.068 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:17.068 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:17.068 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:17.068 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:17.068 0000:00:13.0: 
build_io_request_10 Invalid IO length parameter 00:10:17.068 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:17.068 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:17.327 NVMe Readv/Writev Request test 00:10:17.327 Attached to 0000:00:10.0 00:10:17.327 Attached to 0000:00:11.0 00:10:17.327 Attached to 0000:00:13.0 00:10:17.327 Attached to 0000:00:12.0 00:10:17.327 0000:00:10.0: build_io_request_2 test passed 00:10:17.327 0000:00:10.0: build_io_request_4 test passed 00:10:17.327 0000:00:10.0: build_io_request_5 test passed 00:10:17.327 0000:00:10.0: build_io_request_6 test passed 00:10:17.327 0000:00:10.0: build_io_request_7 test passed 00:10:17.327 0000:00:10.0: build_io_request_10 test passed 00:10:17.327 0000:00:11.0: build_io_request_2 test passed 00:10:17.327 0000:00:11.0: build_io_request_4 test passed 00:10:17.327 0000:00:11.0: build_io_request_5 test passed 00:10:17.327 0000:00:11.0: build_io_request_6 test passed 00:10:17.327 0000:00:11.0: build_io_request_7 test passed 00:10:17.327 0000:00:11.0: build_io_request_10 test passed 00:10:17.327 Cleaning up... 00:10:17.327 00:10:17.327 real 0m0.282s 00:10:17.327 user 0m0.131s 00:10:17.327 sys 0m0.108s 00:10:17.327 18:28:17 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:17.327 18:28:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:17.327 ************************************ 00:10:17.327 END TEST nvme_sgl 00:10:17.327 ************************************ 00:10:17.327 18:28:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:17.327 18:28:17 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:17.327 18:28:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.327 18:28:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.327 ************************************ 00:10:17.327 START TEST nvme_e2edp 00:10:17.327 ************************************ 00:10:17.327 18:28:17 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:17.585 NVMe Write/Read with End-to-End data protection test 00:10:17.585 Attached to 0000:00:10.0 00:10:17.585 Attached to 0000:00:11.0 00:10:17.586 Attached to 0000:00:13.0 00:10:17.586 Attached to 0000:00:12.0 00:10:17.586 Cleaning up... 
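
Each functional test in this part of the log is driven by the same run_test wrapper: a START TEST banner, the invocation through common/autotest_common.sh, a real/user/sys timing block like the one that follows, and a matching END TEST banner. A minimal sketch of scraping per-test wall-clock time from a saved copy of this log; the log path is whatever file the output was captured to, the regexes are assumptions based on the banner text shown here rather than a stable runner interface, and it assumes one runner entry per line and the flat sub-test sequence visible in this section.

    import re
    import sys

    # Sketch only: these patterns mirror the START/END TEST banners and the
    # shell `time` output visible in this log; they are not a stable format.
    START_RE = re.compile(r"START TEST (\S+)")
    END_RE = re.compile(r"END TEST (\S+)")
    REAL_RE = re.compile(r"real\s+(\d+)m([\d.]+)s")

    def test_durations(log_path):
        """Return {test name: wall-clock seconds}, taken from the `real`
        line that precedes each END TEST banner."""
        durations, current, last_real = {}, None, None
        with open(log_path) as fh:
            for line in fh:
                start = START_RE.search(line)
                if start:
                    current, last_real = start.group(1), None
                    continue
                timing = REAL_RE.search(line)
                if timing:
                    last_real = int(timing.group(1)) * 60 + float(timing.group(2))
                    continue
                end = END_RE.search(line)
                if end and end.group(1) == current:
                    durations[current] = last_real
                    current = None
        return durations

    if __name__ == "__main__":
        for name, seconds in test_durations(sys.argv[1]).items():
            print(f"{name}: {seconds if seconds is not None else 'n/a'}s")

Run against this log it would report, for example, about 0.254 s of wall-clock time for nvme_hello_world and about 2.5 s (real 0m2.492s) for nvme_perf, matching the real lines above.
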
00:10:17.586 00:10:17.586 real 0m0.231s 00:10:17.586 user 0m0.071s 00:10:17.586 sys 0m0.114s 00:10:17.586 18:28:17 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:17.586 18:28:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:17.586 ************************************ 00:10:17.586 END TEST nvme_e2edp 00:10:17.586 ************************************ 00:10:17.586 18:28:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:17.586 18:28:17 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:17.586 18:28:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.586 18:28:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.586 ************************************ 00:10:17.586 START TEST nvme_reserve 00:10:17.586 ************************************ 00:10:17.586 18:28:17 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:17.845 ===================================================== 00:10:17.845 NVMe Controller at PCI bus 0, device 16, function 0 00:10:17.845 ===================================================== 00:10:17.845 Reservations: Not Supported 00:10:17.845 ===================================================== 00:10:17.845 NVMe Controller at PCI bus 0, device 17, function 0 00:10:17.845 ===================================================== 00:10:17.845 Reservations: Not Supported 00:10:17.845 ===================================================== 00:10:17.845 NVMe Controller at PCI bus 0, device 19, function 0 00:10:17.845 ===================================================== 00:10:17.845 Reservations: Not Supported 00:10:17.845 ===================================================== 00:10:17.845 NVMe Controller at PCI bus 0, device 18, function 0 00:10:17.845 ===================================================== 00:10:17.845 Reservations: Not Supported 00:10:17.845 Reservation test passed 00:10:17.845 00:10:17.845 real 0m0.230s 00:10:17.845 user 0m0.074s 00:10:17.845 sys 0m0.109s 00:10:17.845 18:28:17 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:17.845 18:28:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:17.845 ************************************ 00:10:17.845 END TEST nvme_reserve 00:10:17.845 ************************************ 00:10:17.845 18:28:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:17.845 18:28:17 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:17.845 18:28:17 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.845 18:28:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.845 ************************************ 00:10:17.845 START TEST nvme_err_injection 00:10:17.845 ************************************ 00:10:17.845 18:28:17 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:18.104 NVMe Error Injection test 00:10:18.104 Attached to 0000:00:10.0 00:10:18.104 Attached to 0000:00:11.0 00:10:18.104 Attached to 0000:00:13.0 00:10:18.104 Attached to 0000:00:12.0 00:10:18.104 0000:00:10.0: get features failed as expected 00:10:18.104 0000:00:11.0: get features failed as expected 00:10:18.104 0000:00:13.0: get features failed as expected 00:10:18.104 0000:00:12.0: get features failed as expected 00:10:18.104 
0000:00:10.0: get features successfully as expected 00:10:18.104 0000:00:11.0: get features successfully as expected 00:10:18.104 0000:00:13.0: get features successfully as expected 00:10:18.104 0000:00:12.0: get features successfully as expected 00:10:18.104 0000:00:11.0: read failed as expected 00:10:18.104 0000:00:10.0: read failed as expected 00:10:18.104 0000:00:13.0: read failed as expected 00:10:18.104 0000:00:12.0: read failed as expected 00:10:18.104 0000:00:10.0: read successfully as expected 00:10:18.104 0000:00:11.0: read successfully as expected 00:10:18.104 0000:00:13.0: read successfully as expected 00:10:18.104 0000:00:12.0: read successfully as expected 00:10:18.104 Cleaning up... 00:10:18.104 00:10:18.104 real 0m0.240s 00:10:18.104 user 0m0.088s 00:10:18.104 sys 0m0.107s 00:10:18.104 18:28:18 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:18.104 18:28:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:18.104 ************************************ 00:10:18.104 END TEST nvme_err_injection 00:10:18.104 ************************************ 00:10:18.104 18:28:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:18.104 18:28:18 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:18.104 18:28:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.104 18:28:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.104 ************************************ 00:10:18.104 START TEST nvme_overhead 00:10:18.104 ************************************ 00:10:18.104 18:28:18 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:19.483 Initializing NVMe Controllers 00:10:19.483 Attached to 0000:00:10.0 00:10:19.483 Attached to 0000:00:11.0 00:10:19.483 Attached to 0000:00:13.0 00:10:19.483 Attached to 0000:00:12.0 00:10:19.483 Initialization complete. Launching workers. 
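
The overhead run that follows reports per-IO submit and complete latency twice: first as avg/min/max in nanoseconds, then as cumulative histograms bucketed in microseconds, using the same "lower - upper: cumulative% ( count )" form as the nvme_perf latency histograms earlier in this section. A minimal sketch of recovering an approximate p99 from those cumulative percentages; it assumes the log was saved to a file with one runner entry per line, and the file name and regexes are illustrative rather than a guaranteed output format.

    import re
    import sys

    # Sketch only: bucket lines look like "12019.703 - 12076.940: 28.2680% ( 60)"
    # (range in microseconds); section headers are "Latency histogram for ...
    # from core N" (nvme_perf) or "Submit histogram" / "Complete histogram"
    # (overhead).
    HEADER_RE = re.compile(
        r"(Latency histogram for .+ from core \d+|Submit histogram|Complete histogram)"
    )
    BUCKET_RE = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*\d+\)")

    def approx_percentiles(log_path, target=99.0):
        """Map each histogram to the upper bound (in us) of the first bucket
        whose cumulative percentage reaches `target` -- an approximate p99."""
        out, current = {}, None
        with open(log_path) as fh:
            for line in fh:
                header = HEADER_RE.search(line)
                if header:
                    current = header.group(1)
                    continue
                if current is None or current in out:
                    continue
                for bucket in BUCKET_RE.finditer(line):
                    if float(bucket.group(3)) >= target:
                        out[current] = float(bucket.group(2))
                        break
        return out

    if __name__ == "__main__":
        for name, upper_us in approx_percentiles(sys.argv[1]).items():
            print(f"{name}: ~p99 <= {upper_us} us")

In this run the Submit histogram below reaches 99% cumulative around 22 us, against an average of roughly 13.7 us (13676.1 ns) from the summary line.
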
00:10:19.483 submit (in ns) avg, min, max = 13676.1, 11313.5, 193014.0 00:10:19.483 complete (in ns) avg, min, max = 7879.0, 6316.2, 41533.6 00:10:19.483 00:10:19.483 Submit histogram 00:10:19.483 ================ 00:10:19.483 Range in us Cumulative Count 00:10:19.483 11.291 - 11.347: 0.0105% ( 1) 00:10:19.483 11.514 - 11.570: 0.0210% ( 1) 00:10:19.483 11.570 - 11.626: 0.0736% ( 5) 00:10:19.483 11.626 - 11.682: 0.1051% ( 3) 00:10:19.483 11.682 - 11.738: 0.1577% ( 5) 00:10:19.483 11.738 - 11.794: 0.2102% ( 5) 00:10:19.483 11.794 - 11.850: 0.2523% ( 4) 00:10:19.483 11.850 - 11.906: 0.3469% ( 9) 00:10:19.483 11.906 - 11.962: 0.5466% ( 19) 00:10:19.483 11.962 - 12.017: 0.7673% ( 21) 00:10:19.483 12.017 - 12.073: 1.1352% ( 35) 00:10:19.483 12.073 - 12.129: 1.7028% ( 54) 00:10:19.483 12.129 - 12.185: 2.6067% ( 86) 00:10:19.483 12.185 - 12.241: 3.5632% ( 91) 00:10:19.483 12.241 - 12.297: 5.2134% ( 157) 00:10:19.483 12.297 - 12.353: 6.9582% ( 166) 00:10:19.483 12.353 - 12.409: 8.8816% ( 183) 00:10:19.483 12.409 - 12.465: 11.2045% ( 221) 00:10:19.483 12.465 - 12.521: 13.7166% ( 239) 00:10:19.483 12.521 - 12.576: 16.5230% ( 267) 00:10:19.483 12.576 - 12.632: 19.6447% ( 297) 00:10:19.483 12.632 - 12.688: 22.9346% ( 313) 00:10:19.483 12.688 - 12.744: 26.2981% ( 320) 00:10:19.483 12.744 - 12.800: 29.7456% ( 328) 00:10:19.483 12.800 - 12.856: 33.0145% ( 311) 00:10:19.483 12.856 - 12.912: 36.4305% ( 325) 00:10:19.483 12.912 - 12.968: 40.3826% ( 376) 00:10:19.483 12.968 - 13.024: 44.0193% ( 346) 00:10:19.483 13.024 - 13.079: 47.4038% ( 322) 00:10:19.483 13.079 - 13.135: 50.9144% ( 334) 00:10:19.483 13.135 - 13.191: 53.8785% ( 282) 00:10:19.483 13.191 - 13.247: 57.1474% ( 311) 00:10:19.483 13.247 - 13.303: 60.4057% ( 310) 00:10:19.483 13.303 - 13.359: 63.6010% ( 304) 00:10:19.483 13.359 - 13.415: 65.9659% ( 225) 00:10:19.483 13.415 - 13.471: 68.2153% ( 214) 00:10:19.483 13.471 - 13.527: 70.2859% ( 197) 00:10:19.483 13.527 - 13.583: 72.1463% ( 177) 00:10:19.483 13.583 - 13.638: 73.9542% ( 172) 00:10:19.483 13.638 - 13.694: 75.4677% ( 144) 00:10:19.483 13.694 - 13.750: 76.6555% ( 113) 00:10:19.483 13.750 - 13.806: 77.9483% ( 123) 00:10:19.483 13.806 - 13.862: 78.9994% ( 100) 00:10:19.483 13.862 - 13.918: 80.0820% ( 103) 00:10:19.483 13.918 - 13.974: 80.9964% ( 87) 00:10:19.483 13.974 - 14.030: 81.9004% ( 86) 00:10:19.483 14.030 - 14.086: 82.6151% ( 68) 00:10:19.483 14.086 - 14.141: 83.2878% ( 64) 00:10:19.483 14.141 - 14.197: 84.1181% ( 79) 00:10:19.483 14.197 - 14.253: 84.6752% ( 53) 00:10:19.483 14.253 - 14.309: 85.1902% ( 49) 00:10:19.483 14.309 - 14.421: 85.9365% ( 71) 00:10:19.483 14.421 - 14.533: 86.5672% ( 60) 00:10:19.483 14.533 - 14.645: 87.1768% ( 58) 00:10:19.483 14.645 - 14.756: 87.5972% ( 40) 00:10:19.483 14.756 - 14.868: 87.8180% ( 21) 00:10:19.483 14.868 - 14.980: 88.0492% ( 22) 00:10:19.483 14.980 - 15.092: 88.2384% ( 18) 00:10:19.483 15.092 - 15.203: 88.3855% ( 14) 00:10:19.483 15.203 - 15.315: 88.6378% ( 24) 00:10:19.483 15.315 - 15.427: 88.9426% ( 29) 00:10:19.483 15.427 - 15.539: 89.5522% ( 58) 00:10:19.483 15.539 - 15.651: 90.3406% ( 75) 00:10:19.483 15.651 - 15.762: 91.1604% ( 78) 00:10:19.483 15.762 - 15.874: 91.7805% ( 59) 00:10:19.483 15.874 - 15.986: 92.5058% ( 69) 00:10:19.483 15.986 - 16.098: 92.9262% ( 40) 00:10:19.483 16.098 - 16.210: 93.2415% ( 30) 00:10:19.483 16.210 - 16.321: 93.5674% ( 31) 00:10:19.483 16.321 - 16.433: 93.7776% ( 20) 00:10:19.483 16.433 - 16.545: 93.9983% ( 21) 00:10:19.483 16.545 - 16.657: 94.1560% ( 15) 00:10:19.483 16.657 - 16.769: 94.2716% ( 11) 
00:10:19.483 16.769 - 16.880: 94.3557% ( 8) 00:10:19.483 16.880 - 16.992: 94.4293% ( 7) 00:10:19.483 16.992 - 17.104: 94.5344% ( 10) 00:10:19.483 17.104 - 17.216: 94.6185% ( 8) 00:10:19.483 17.216 - 17.328: 94.6920% ( 7) 00:10:19.483 17.328 - 17.439: 94.8182% ( 12) 00:10:19.483 17.439 - 17.551: 94.9863% ( 16) 00:10:19.483 17.551 - 17.663: 95.0599% ( 7) 00:10:19.483 17.663 - 17.775: 95.1966% ( 13) 00:10:19.483 17.775 - 17.886: 95.3752% ( 17) 00:10:19.483 17.886 - 17.998: 95.6065% ( 22) 00:10:19.483 17.998 - 18.110: 95.7536% ( 14) 00:10:19.483 18.110 - 18.222: 95.9849% ( 22) 00:10:19.483 18.222 - 18.334: 96.1425% ( 15) 00:10:19.483 18.334 - 18.445: 96.3002% ( 15) 00:10:19.483 18.445 - 18.557: 96.4789% ( 17) 00:10:19.483 18.557 - 18.669: 96.6681% ( 18) 00:10:19.483 18.669 - 18.781: 96.8257% ( 15) 00:10:19.483 18.781 - 18.893: 96.9519% ( 12) 00:10:19.483 18.893 - 19.004: 97.1095% ( 15) 00:10:19.483 19.004 - 19.116: 97.2462% ( 13) 00:10:19.483 19.116 - 19.228: 97.3408% ( 9) 00:10:19.483 19.228 - 19.340: 97.4564% ( 11) 00:10:19.483 19.340 - 19.452: 97.5615% ( 10) 00:10:19.483 19.452 - 19.563: 97.6666% ( 10) 00:10:19.483 19.563 - 19.675: 97.7402% ( 7) 00:10:19.483 19.675 - 19.787: 97.8243% ( 8) 00:10:19.483 19.787 - 19.899: 97.8978% ( 7) 00:10:19.483 19.899 - 20.010: 97.9294% ( 3) 00:10:19.483 20.010 - 20.122: 97.9819% ( 5) 00:10:19.483 20.122 - 20.234: 98.0765% ( 9) 00:10:19.483 20.234 - 20.346: 98.1396% ( 6) 00:10:19.483 20.346 - 20.458: 98.2447% ( 10) 00:10:19.483 20.458 - 20.569: 98.3603% ( 11) 00:10:19.483 20.569 - 20.681: 98.4759% ( 11) 00:10:19.483 20.681 - 20.793: 98.5180% ( 4) 00:10:19.483 20.793 - 20.905: 98.5810% ( 6) 00:10:19.483 20.905 - 21.017: 98.6126% ( 3) 00:10:19.483 21.017 - 21.128: 98.6967% ( 8) 00:10:19.483 21.128 - 21.240: 98.7492% ( 5) 00:10:19.483 21.240 - 21.352: 98.8228% ( 7) 00:10:19.483 21.352 - 21.464: 98.8543% ( 3) 00:10:19.483 21.464 - 21.576: 98.8859% ( 3) 00:10:19.483 21.576 - 21.687: 98.8964% ( 1) 00:10:19.483 21.687 - 21.799: 98.9489% ( 5) 00:10:19.483 21.799 - 21.911: 98.9594% ( 1) 00:10:19.483 21.911 - 22.023: 98.9804% ( 2) 00:10:19.483 22.023 - 22.134: 99.0225% ( 4) 00:10:19.483 22.134 - 22.246: 99.0856% ( 6) 00:10:19.483 22.246 - 22.358: 99.1066% ( 2) 00:10:19.483 22.358 - 22.470: 99.1591% ( 5) 00:10:19.483 22.470 - 22.582: 99.2012% ( 4) 00:10:19.483 22.582 - 22.693: 99.2432% ( 4) 00:10:19.483 22.693 - 22.805: 99.2537% ( 1) 00:10:19.483 22.805 - 22.917: 99.2748% ( 2) 00:10:19.483 22.917 - 23.029: 99.2958% ( 2) 00:10:19.483 23.029 - 23.141: 99.3063% ( 1) 00:10:19.483 23.141 - 23.252: 99.3273% ( 2) 00:10:19.483 23.252 - 23.364: 99.3588% ( 3) 00:10:19.483 23.364 - 23.476: 99.3694% ( 1) 00:10:19.483 23.476 - 23.588: 99.3904% ( 2) 00:10:19.483 23.588 - 23.700: 99.4009% ( 1) 00:10:19.483 23.700 - 23.811: 99.4324% ( 3) 00:10:19.483 23.811 - 23.923: 99.4745% ( 4) 00:10:19.483 23.923 - 24.035: 99.4850% ( 1) 00:10:19.483 24.035 - 24.147: 99.5165% ( 3) 00:10:19.483 24.259 - 24.370: 99.5480% ( 3) 00:10:19.483 24.370 - 24.482: 99.5796% ( 3) 00:10:19.483 24.482 - 24.594: 99.5901% ( 1) 00:10:19.483 24.594 - 24.706: 99.6006% ( 1) 00:10:19.483 24.817 - 24.929: 99.6216% ( 2) 00:10:19.483 25.600 - 25.712: 99.6426% ( 2) 00:10:19.483 25.712 - 25.824: 99.6531% ( 1) 00:10:19.483 26.047 - 26.159: 99.6637% ( 1) 00:10:19.483 26.271 - 26.383: 99.6742% ( 1) 00:10:19.484 26.606 - 26.718: 99.6952% ( 2) 00:10:19.484 27.724 - 27.836: 99.7057% ( 1) 00:10:19.484 28.507 - 28.618: 99.7267% ( 2) 00:10:19.484 28.618 - 28.842: 99.7583% ( 3) 00:10:19.484 28.842 - 29.066: 99.8108% ( 5) 00:10:19.484 
29.066 - 29.289: 99.8318% ( 2) 00:10:19.484 29.289 - 29.513: 99.8739% ( 4) 00:10:19.484 29.513 - 29.736: 99.8844% ( 1) 00:10:19.484 29.960 - 30.183: 99.8949% ( 1) 00:10:19.484 30.407 - 30.631: 99.9054% ( 1) 00:10:19.484 30.854 - 31.078: 99.9159% ( 1) 00:10:19.484 33.984 - 34.208: 99.9369% ( 2) 00:10:19.484 34.655 - 34.879: 99.9474% ( 1) 00:10:19.484 38.456 - 38.679: 99.9580% ( 1) 00:10:19.484 39.127 - 39.350: 99.9685% ( 1) 00:10:19.484 39.574 - 39.797: 99.9790% ( 1) 00:10:19.484 55.224 - 55.448: 99.9895% ( 1) 00:10:19.484 192.279 - 193.174: 100.0000% ( 1) 00:10:19.484 00:10:19.484 Complete histogram 00:10:19.484 ================== 00:10:19.484 Range in us Cumulative Count 00:10:19.484 6.316 - 6.344: 0.0105% ( 1) 00:10:19.484 6.344 - 6.372: 0.0315% ( 2) 00:10:19.484 6.372 - 6.400: 0.0526% ( 2) 00:10:19.484 6.400 - 6.428: 0.1261% ( 7) 00:10:19.484 6.428 - 6.456: 0.1682% ( 4) 00:10:19.484 6.456 - 6.484: 0.2102% ( 4) 00:10:19.484 6.484 - 6.512: 0.3153% ( 10) 00:10:19.484 6.512 - 6.540: 0.3574% ( 4) 00:10:19.484 6.540 - 6.568: 0.4309% ( 7) 00:10:19.484 6.568 - 6.596: 0.4835% ( 5) 00:10:19.484 6.596 - 6.624: 0.5150% ( 3) 00:10:19.484 6.624 - 6.652: 0.6832% ( 16) 00:10:19.484 6.652 - 6.679: 0.8198% ( 13) 00:10:19.484 6.679 - 6.707: 0.9460% ( 12) 00:10:19.484 6.707 - 6.735: 1.2193% ( 26) 00:10:19.484 6.735 - 6.763: 1.9340% ( 68) 00:10:19.484 6.763 - 6.791: 2.8274% ( 85) 00:10:19.484 6.791 - 6.819: 4.5932% ( 168) 00:10:19.484 6.819 - 6.847: 6.6954% ( 200) 00:10:19.484 6.847 - 6.875: 8.8606% ( 206) 00:10:19.484 6.875 - 6.903: 11.4042% ( 242) 00:10:19.484 6.903 - 6.931: 13.6746% ( 216) 00:10:19.484 6.931 - 6.959: 15.5876% ( 182) 00:10:19.484 6.959 - 6.987: 17.7633% ( 207) 00:10:19.484 6.987 - 7.015: 19.8129% ( 195) 00:10:19.484 7.015 - 7.043: 21.4631% ( 157) 00:10:19.484 7.043 - 7.071: 23.2499% ( 170) 00:10:19.484 7.071 - 7.099: 25.0473% ( 171) 00:10:19.484 7.099 - 7.127: 26.8657% ( 173) 00:10:19.484 7.127 - 7.155: 28.7681% ( 181) 00:10:19.484 7.155 - 7.210: 32.1421% ( 321) 00:10:19.484 7.210 - 7.266: 35.2533% ( 296) 00:10:19.484 7.266 - 7.322: 37.9546% ( 257) 00:10:19.484 7.322 - 7.378: 40.6559% ( 257) 00:10:19.484 7.378 - 7.434: 43.5884% ( 279) 00:10:19.484 7.434 - 7.490: 46.2266% ( 251) 00:10:19.484 7.490 - 7.546: 49.3904% ( 301) 00:10:19.484 7.546 - 7.602: 53.4581% ( 387) 00:10:19.484 7.602 - 7.658: 59.2811% ( 554) 00:10:19.484 7.658 - 7.714: 64.9569% ( 540) 00:10:19.484 7.714 - 7.769: 70.3910% ( 517) 00:10:19.484 7.769 - 7.825: 74.1013% ( 353) 00:10:19.484 7.825 - 7.881: 76.8762% ( 264) 00:10:19.484 7.881 - 7.937: 78.7681% ( 180) 00:10:19.484 7.937 - 7.993: 80.1661% ( 133) 00:10:19.484 7.993 - 8.049: 81.2066% ( 99) 00:10:19.484 8.049 - 8.105: 82.0790% ( 83) 00:10:19.484 8.105 - 8.161: 82.8568% ( 74) 00:10:19.484 8.161 - 8.217: 83.6662% ( 77) 00:10:19.484 8.217 - 8.272: 84.5701% ( 86) 00:10:19.484 8.272 - 8.328: 85.4005% ( 79) 00:10:19.484 8.328 - 8.384: 86.2518% ( 81) 00:10:19.484 8.384 - 8.440: 86.9771% ( 69) 00:10:19.484 8.440 - 8.496: 87.5552% ( 55) 00:10:19.484 8.496 - 8.552: 88.0492% ( 47) 00:10:19.484 8.552 - 8.608: 88.3120% ( 25) 00:10:19.484 8.608 - 8.664: 88.5432% ( 22) 00:10:19.484 8.664 - 8.720: 88.7639% ( 21) 00:10:19.484 8.720 - 8.776: 88.8690% ( 10) 00:10:19.484 8.776 - 8.831: 88.9321% ( 6) 00:10:19.484 8.831 - 8.887: 89.0267% ( 9) 00:10:19.484 8.887 - 8.943: 89.0898% ( 6) 00:10:19.484 8.943 - 8.999: 89.1423% ( 5) 00:10:19.484 8.999 - 9.055: 89.2054% ( 6) 00:10:19.484 9.055 - 9.111: 89.2790% ( 7) 00:10:19.484 9.111 - 9.167: 89.3841% ( 10) 00:10:19.484 9.167 - 9.223: 89.4156% ( 3) 
00:10:19.484 9.223 - 9.279: 89.4997% ( 8) 00:10:19.484 9.279 - 9.334: 89.5838% ( 8) 00:10:19.484 9.334 - 9.390: 89.6468% ( 6) 00:10:19.484 9.390 - 9.446: 89.7099% ( 6) 00:10:19.484 9.446 - 9.502: 89.7940% ( 8) 00:10:19.484 9.502 - 9.558: 89.9622% ( 16) 00:10:19.484 9.558 - 9.614: 90.3300% ( 35) 00:10:19.484 9.614 - 9.670: 91.2340% ( 86) 00:10:19.484 9.670 - 9.726: 92.3481% ( 106) 00:10:19.484 9.726 - 9.782: 93.6725% ( 126) 00:10:19.484 9.782 - 9.838: 94.5554% ( 84) 00:10:19.484 9.838 - 9.893: 95.3332% ( 74) 00:10:19.484 9.893 - 9.949: 95.8482% ( 49) 00:10:19.484 9.949 - 10.005: 96.2161% ( 35) 00:10:19.484 10.005 - 10.061: 96.4368% ( 21) 00:10:19.484 10.061 - 10.117: 96.6155% ( 17) 00:10:19.484 10.117 - 10.173: 96.7732% ( 15) 00:10:19.484 10.173 - 10.229: 96.8993% ( 12) 00:10:19.484 10.229 - 10.285: 97.0570% ( 15) 00:10:19.484 10.285 - 10.341: 97.1095% ( 5) 00:10:19.484 10.341 - 10.397: 97.1936% ( 8) 00:10:19.484 10.397 - 10.452: 97.2777% ( 8) 00:10:19.484 10.452 - 10.508: 97.3092% ( 3) 00:10:19.484 10.508 - 10.564: 97.3933% ( 8) 00:10:19.484 10.564 - 10.620: 97.4354% ( 4) 00:10:19.484 10.620 - 10.676: 97.4879% ( 5) 00:10:19.484 10.676 - 10.732: 97.4984% ( 1) 00:10:19.484 10.732 - 10.788: 97.5194% ( 2) 00:10:19.484 10.788 - 10.844: 97.5300% ( 1) 00:10:19.484 10.844 - 10.900: 97.5510% ( 2) 00:10:19.484 10.900 - 10.955: 97.5825% ( 3) 00:10:19.484 10.955 - 11.011: 97.6035% ( 2) 00:10:19.484 11.011 - 11.067: 97.6140% ( 1) 00:10:19.484 11.123 - 11.179: 97.6246% ( 1) 00:10:19.484 11.235 - 11.291: 97.6351% ( 1) 00:10:19.484 11.347 - 11.403: 97.6561% ( 2) 00:10:19.484 11.403 - 11.459: 97.6666% ( 1) 00:10:19.484 11.459 - 11.514: 97.6771% ( 1) 00:10:19.484 11.738 - 11.794: 97.6876% ( 1) 00:10:19.484 11.850 - 11.906: 97.6981% ( 1) 00:10:19.484 11.906 - 11.962: 97.7086% ( 1) 00:10:19.484 12.129 - 12.185: 97.7297% ( 2) 00:10:19.484 12.297 - 12.353: 97.7507% ( 2) 00:10:19.484 12.409 - 12.465: 97.7612% ( 1) 00:10:19.484 12.465 - 12.521: 97.8032% ( 4) 00:10:19.484 12.521 - 12.576: 97.8137% ( 1) 00:10:19.484 12.576 - 12.632: 97.8348% ( 2) 00:10:19.484 12.632 - 12.688: 97.8453% ( 1) 00:10:19.484 12.688 - 12.744: 97.8663% ( 2) 00:10:19.484 12.744 - 12.800: 97.8768% ( 1) 00:10:19.484 12.800 - 12.856: 97.9294% ( 5) 00:10:19.484 12.912 - 12.968: 97.9504% ( 2) 00:10:19.484 12.968 - 13.024: 97.9609% ( 1) 00:10:19.484 13.024 - 13.079: 97.9819% ( 2) 00:10:19.484 13.079 - 13.135: 97.9924% ( 1) 00:10:19.484 13.135 - 13.191: 98.0135% ( 2) 00:10:19.484 13.191 - 13.247: 98.0450% ( 3) 00:10:19.484 13.247 - 13.303: 98.0555% ( 1) 00:10:19.484 13.303 - 13.359: 98.0660% ( 1) 00:10:19.484 13.359 - 13.415: 98.1291% ( 6) 00:10:19.484 13.415 - 13.471: 98.1396% ( 1) 00:10:19.484 13.471 - 13.527: 98.1816% ( 4) 00:10:19.484 13.527 - 13.583: 98.2342% ( 5) 00:10:19.484 13.583 - 13.638: 98.2552% ( 2) 00:10:19.484 13.638 - 13.694: 98.2762% ( 2) 00:10:19.484 13.694 - 13.750: 98.3078% ( 3) 00:10:19.484 13.750 - 13.806: 98.3603% ( 5) 00:10:19.484 13.806 - 13.862: 98.3813% ( 2) 00:10:19.484 13.862 - 13.918: 98.4234% ( 4) 00:10:19.484 13.974 - 14.030: 98.4444% ( 2) 00:10:19.484 14.030 - 14.086: 98.4549% ( 1) 00:10:19.484 14.086 - 14.141: 98.4864% ( 3) 00:10:19.484 14.197 - 14.253: 98.4970% ( 1) 00:10:19.484 14.253 - 14.309: 98.5285% ( 3) 00:10:19.484 14.309 - 14.421: 98.5600% ( 3) 00:10:19.484 14.421 - 14.533: 98.5810% ( 2) 00:10:19.484 14.645 - 14.756: 98.6336% ( 5) 00:10:19.484 14.756 - 14.868: 98.6546% ( 2) 00:10:19.484 14.868 - 14.980: 98.6651% ( 1) 00:10:19.484 14.980 - 15.092: 98.7072% ( 4) 00:10:19.484 15.203 - 15.315: 98.7387% ( 3) 
00:10:19.484 15.315 - 15.427: 98.7807% ( 4) 00:10:19.484 15.427 - 15.539: 98.8228% ( 4) 00:10:19.484 15.539 - 15.651: 98.8438% ( 2) 00:10:19.484 15.651 - 15.762: 98.8859% ( 4) 00:10:19.484 15.762 - 15.874: 98.9069% ( 2) 00:10:19.484 15.874 - 15.986: 98.9699% ( 6) 00:10:19.484 15.986 - 16.098: 99.0015% ( 3) 00:10:19.484 16.098 - 16.210: 99.0225% ( 2) 00:10:19.484 16.321 - 16.433: 99.0435% ( 2) 00:10:19.484 16.433 - 16.545: 99.0540% ( 1) 00:10:19.484 16.545 - 16.657: 99.0856% ( 3) 00:10:19.484 16.657 - 16.769: 99.1066% ( 2) 00:10:19.484 16.880 - 16.992: 99.1276% ( 2) 00:10:19.484 17.104 - 17.216: 99.1486% ( 2) 00:10:19.484 17.663 - 17.775: 99.1802% ( 3) 00:10:19.484 17.775 - 17.886: 99.2117% ( 3) 00:10:19.484 17.886 - 17.998: 99.2432% ( 3) 00:10:19.484 17.998 - 18.110: 99.2958% ( 5) 00:10:19.485 18.110 - 18.222: 99.3168% ( 2) 00:10:19.485 18.334 - 18.445: 99.3799% ( 6) 00:10:19.485 18.445 - 18.557: 99.3904% ( 1) 00:10:19.485 18.557 - 18.669: 99.4324% ( 4) 00:10:19.485 18.669 - 18.781: 99.4429% ( 1) 00:10:19.485 18.781 - 18.893: 99.4745% ( 3) 00:10:19.485 19.004 - 19.116: 99.5060% ( 3) 00:10:19.485 19.563 - 19.675: 99.5165% ( 1) 00:10:19.485 21.017 - 21.128: 99.5270% ( 1) 00:10:19.485 21.352 - 21.464: 99.5375% ( 1) 00:10:19.485 22.582 - 22.693: 99.5480% ( 1) 00:10:19.485 22.917 - 23.029: 99.5796% ( 3) 00:10:19.485 23.141 - 23.252: 99.6006% ( 2) 00:10:19.485 23.252 - 23.364: 99.6216% ( 2) 00:10:19.485 23.364 - 23.476: 99.6426% ( 2) 00:10:19.485 23.476 - 23.588: 99.6742% ( 3) 00:10:19.485 23.588 - 23.700: 99.6847% ( 1) 00:10:19.485 23.700 - 23.811: 99.6952% ( 1) 00:10:19.485 23.811 - 23.923: 99.7057% ( 1) 00:10:19.485 23.923 - 24.035: 99.7372% ( 3) 00:10:19.485 24.035 - 24.147: 99.7477% ( 1) 00:10:19.485 24.259 - 24.370: 99.7583% ( 1) 00:10:19.485 24.370 - 24.482: 99.7793% ( 2) 00:10:19.485 24.594 - 24.706: 99.7898% ( 1) 00:10:19.485 24.817 - 24.929: 99.8003% ( 1) 00:10:19.485 25.041 - 25.153: 99.8108% ( 1) 00:10:19.485 25.265 - 25.376: 99.8213% ( 1) 00:10:19.485 25.376 - 25.488: 99.8318% ( 1) 00:10:19.485 25.600 - 25.712: 99.8528% ( 2) 00:10:19.485 26.047 - 26.159: 99.8634% ( 1) 00:10:19.485 26.941 - 27.053: 99.8739% ( 1) 00:10:19.485 28.171 - 28.283: 99.8844% ( 1) 00:10:19.485 28.507 - 28.618: 99.8949% ( 1) 00:10:19.485 30.854 - 31.078: 99.9054% ( 1) 00:10:19.485 31.748 - 31.972: 99.9159% ( 1) 00:10:19.485 31.972 - 32.196: 99.9264% ( 1) 00:10:19.485 33.761 - 33.984: 99.9369% ( 1) 00:10:19.485 34.208 - 34.431: 99.9474% ( 1) 00:10:19.485 35.773 - 35.997: 99.9580% ( 1) 00:10:19.485 38.009 - 38.232: 99.9685% ( 1) 00:10:19.485 39.127 - 39.350: 99.9790% ( 1) 00:10:19.485 40.915 - 41.139: 99.9895% ( 1) 00:10:19.485 41.362 - 41.586: 100.0000% ( 1) 00:10:19.485 00:10:19.485 00:10:19.485 real 0m1.235s 00:10:19.485 user 0m1.071s 00:10:19.485 sys 0m0.117s 00:10:19.485 18:28:19 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.485 18:28:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:19.485 ************************************ 00:10:19.485 END TEST nvme_overhead 00:10:19.485 ************************************ 00:10:19.485 18:28:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:19.485 18:28:19 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:19.485 18:28:19 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.485 18:28:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.485 ************************************ 00:10:19.485 START 
TEST nvme_arbitration 00:10:19.485 ************************************ 00:10:19.485 18:28:19 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:22.779 Initializing NVMe Controllers 00:10:22.779 Attached to 0000:00:10.0 00:10:22.779 Attached to 0000:00:11.0 00:10:22.779 Attached to 0000:00:13.0 00:10:22.779 Attached to 0000:00:12.0 00:10:22.779 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:22.779 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:22.779 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:22.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:22.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:22.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:22.779 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:22.779 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:22.779 Initialization complete. Launching workers. 00:10:22.779 Starting thread on core 1 with urgent priority queue 00:10:22.779 Starting thread on core 2 with urgent priority queue 00:10:22.779 Starting thread on core 3 with urgent priority queue 00:10:22.779 Starting thread on core 0 with urgent priority queue 00:10:22.779 QEMU NVMe Ctrl (12340 ) core 0: 3669.33 IO/s 27.25 secs/100000 ios 00:10:22.779 QEMU NVMe Ctrl (12342 ) core 0: 3669.33 IO/s 27.25 secs/100000 ios 00:10:22.779 QEMU NVMe Ctrl (12341 ) core 1: 3733.33 IO/s 26.79 secs/100000 ios 00:10:22.779 QEMU NVMe Ctrl (12342 ) core 1: 3733.33 IO/s 26.79 secs/100000 ios 00:10:22.779 QEMU NVMe Ctrl (12343 ) core 2: 3562.67 IO/s 28.07 secs/100000 ios 00:10:22.779 QEMU NVMe Ctrl (12342 ) core 3: 3605.33 IO/s 27.74 secs/100000 ios 00:10:22.779 ======================================================== 00:10:22.779 00:10:22.779 00:10:22.779 real 0m3.252s 00:10:22.779 user 0m9.039s 00:10:22.779 sys 0m0.126s 00:10:22.779 18:28:22 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:22.779 18:28:22 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:22.779 ************************************ 00:10:22.779 END TEST nvme_arbitration 00:10:22.779 ************************************ 00:10:22.779 18:28:22 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:22.779 18:28:22 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:22.779 18:28:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.779 18:28:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.779 ************************************ 00:10:22.779 START TEST nvme_single_aen 00:10:22.779 ************************************ 00:10:22.779 18:28:22 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:23.038 Asynchronous Event Request test 00:10:23.038 Attached to 0000:00:10.0 00:10:23.038 Attached to 0000:00:11.0 00:10:23.038 Attached to 0000:00:13.0 00:10:23.038 Attached to 0000:00:12.0 00:10:23.038 Reset controller to setup AER completions for this process 00:10:23.038 Registering asynchronous event callbacks... 
00:10:23.038 Getting orig temperature thresholds of all controllers 00:10:23.038 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.038 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.038 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.038 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.038 Setting all controllers temperature threshold low to trigger AER 00:10:23.038 Waiting for all controllers temperature threshold to be set lower 00:10:23.038 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.038 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:23.038 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.038 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:23.038 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.038 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:23.038 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.038 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:23.038 Waiting for all controllers to trigger AER and reset threshold 00:10:23.038 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.038 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.038 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.038 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.038 Cleaning up... 00:10:23.038 00:10:23.038 real 0m0.223s 00:10:23.038 user 0m0.069s 00:10:23.038 sys 0m0.108s 00:10:23.038 18:28:22 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.038 18:28:22 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:23.038 ************************************ 00:10:23.038 END TEST nvme_single_aen 00:10:23.038 ************************************ 00:10:23.038 18:28:22 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:23.038 18:28:22 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:23.038 18:28:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.038 18:28:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.038 ************************************ 00:10:23.038 START TEST nvme_doorbell_aers 00:10:23.038 ************************************ 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:23.038 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
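For reference, the device-enumeration trace above and the per-device loop that follows boil down to a small amount of bash. This is a minimal sketch rather than the test script itself; the repository paths and the 10-second timeout are copied from this run:

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints a JSON config; pull each controller's PCI address (traddr).
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe devices found' >&2; exit 1; }
  for bdf in "${bdfs[@]}"; do
      # --preserve-status keeps the test binary's own exit code if it exits early.
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done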
00:10:23.297 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:23.297 18:28:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:23.297 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:23.297 18:28:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:23.297 [2024-07-23 18:28:23.345796] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:10:33.277 Executing: test_write_invalid_db 00:10:33.277 Waiting for AER completion... 00:10:33.277 Failure: test_write_invalid_db 00:10:33.277 00:10:33.277 Executing: test_invalid_db_write_overflow_sq 00:10:33.277 Waiting for AER completion... 00:10:33.277 Failure: test_invalid_db_write_overflow_sq 00:10:33.277 00:10:33.277 Executing: test_invalid_db_write_overflow_cq 00:10:33.277 Waiting for AER completion... 00:10:33.277 Failure: test_invalid_db_write_overflow_cq 00:10:33.277 00:10:33.277 18:28:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:33.277 18:28:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:33.535 [2024-07-23 18:28:33.370540] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:10:43.570 Executing: test_write_invalid_db 00:10:43.570 Waiting for AER completion... 00:10:43.570 Failure: test_write_invalid_db 00:10:43.570 00:10:43.570 Executing: test_invalid_db_write_overflow_sq 00:10:43.570 Waiting for AER completion... 00:10:43.570 Failure: test_invalid_db_write_overflow_sq 00:10:43.570 00:10:43.570 Executing: test_invalid_db_write_overflow_cq 00:10:43.570 Waiting for AER completion... 00:10:43.570 Failure: test_invalid_db_write_overflow_cq 00:10:43.570 00:10:43.570 18:28:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:43.570 18:28:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:43.570 [2024-07-23 18:28:43.398196] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:10:53.557 Executing: test_write_invalid_db 00:10:53.557 Waiting for AER completion... 00:10:53.557 Failure: test_write_invalid_db 00:10:53.557 00:10:53.558 Executing: test_invalid_db_write_overflow_sq 00:10:53.558 Waiting for AER completion... 00:10:53.558 Failure: test_invalid_db_write_overflow_sq 00:10:53.558 00:10:53.558 Executing: test_invalid_db_write_overflow_cq 00:10:53.558 Waiting for AER completion... 
00:10:53.558 Failure: test_invalid_db_write_overflow_cq 00:10:53.558 00:10:53.558 18:28:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:53.558 18:28:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:53.558 [2024-07-23 18:28:53.447321] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 Executing: test_write_invalid_db 00:11:03.538 Waiting for AER completion... 00:11:03.538 Failure: test_write_invalid_db 00:11:03.538 00:11:03.538 Executing: test_invalid_db_write_overflow_sq 00:11:03.538 Waiting for AER completion... 00:11:03.538 Failure: test_invalid_db_write_overflow_sq 00:11:03.538 00:11:03.538 Executing: test_invalid_db_write_overflow_cq 00:11:03.538 Waiting for AER completion... 00:11:03.538 Failure: test_invalid_db_write_overflow_cq 00:11:03.538 00:11:03.538 00:11:03.538 real 0m40.266s 00:11:03.538 user 0m36.089s 00:11:03.538 sys 0m3.831s 00:11:03.538 18:29:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.538 18:29:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:03.538 ************************************ 00:11:03.538 END TEST nvme_doorbell_aers 00:11:03.538 ************************************ 00:11:03.538 18:29:03 nvme -- nvme/nvme.sh@97 -- # uname 00:11:03.538 18:29:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:03.538 18:29:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:03.538 18:29:03 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:03.538 18:29:03 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:03.538 18:29:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.538 ************************************ 00:11:03.538 START TEST nvme_multi_aen 00:11:03.538 ************************************ 00:11:03.538 18:29:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:03.538 [2024-07-23 18:29:03.519186] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.519441] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.519495] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.521001] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.521114] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.521165] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.522416] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. 
Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.522519] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.522600] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.523705] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.523796] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 [2024-07-23 18:29:03.523867] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80925) is not found. Dropping the request. 00:11:03.538 Child process pid: 81446 00:11:03.796 [Child] Asynchronous Event Request test 00:11:03.796 [Child] Attached to 0000:00:10.0 00:11:03.796 [Child] Attached to 0000:00:11.0 00:11:03.796 [Child] Attached to 0000:00:13.0 00:11:03.796 [Child] Attached to 0000:00:12.0 00:11:03.796 [Child] Registering asynchronous event callbacks... 00:11:03.796 [Child] Getting orig temperature thresholds of all controllers 00:11:03.796 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.796 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.796 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.796 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.796 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:03.797 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 [Child] Cleaning up... 00:11:03.797 Asynchronous Event Request test 00:11:03.797 Attached to 0000:00:10.0 00:11:03.797 Attached to 0000:00:11.0 00:11:03.797 Attached to 0000:00:13.0 00:11:03.797 Attached to 0000:00:12.0 00:11:03.797 Reset controller to setup AER completions for this process 00:11:03.797 Registering asynchronous event callbacks... 
00:11:03.797 Getting orig temperature thresholds of all controllers 00:11:03.797 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.797 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.797 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.797 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:03.797 Setting all controllers temperature threshold low to trigger AER 00:11:03.797 Waiting for all controllers temperature threshold to be set lower 00:11:03.797 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:03.797 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:03.797 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:03.797 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:03.797 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:03.797 Waiting for all controllers to trigger AER and reset threshold 00:11:03.797 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:03.797 Cleaning up... 00:11:03.797 00:11:03.797 real 0m0.447s 00:11:03.797 user 0m0.144s 00:11:03.797 sys 0m0.197s 00:11:03.797 18:29:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:03.797 18:29:03 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:03.797 ************************************ 00:11:03.797 END TEST nvme_multi_aen 00:11:03.797 ************************************ 00:11:03.797 18:29:03 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:03.797 18:29:03 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:03.797 18:29:03 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:03.797 18:29:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.797 ************************************ 00:11:03.797 START TEST nvme_startup 00:11:03.797 ************************************ 00:11:03.797 18:29:03 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:04.055 Initializing NVMe Controllers 00:11:04.055 Attached to 0000:00:10.0 00:11:04.055 Attached to 0000:00:11.0 00:11:04.055 Attached to 0000:00:13.0 00:11:04.055 Attached to 0000:00:12.0 00:11:04.055 Initialization complete. 00:11:04.055 Time used:135502.469 (us). 
00:11:04.055 00:11:04.055 real 0m0.213s 00:11:04.055 user 0m0.078s 00:11:04.055 sys 0m0.088s 00:11:04.055 18:29:04 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.055 18:29:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:04.055 ************************************ 00:11:04.056 END TEST nvme_startup 00:11:04.056 ************************************ 00:11:04.056 18:29:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:04.056 18:29:04 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:04.056 18:29:04 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.056 18:29:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.056 ************************************ 00:11:04.056 START TEST nvme_multi_secondary 00:11:04.056 ************************************ 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81497 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81498 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:04.056 18:29:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:07.341 Initializing NVMe Controllers 00:11:07.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:07.342 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:07.342 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:07.342 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:07.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:07.342 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:07.342 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:07.342 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:07.342 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:07.342 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:07.342 Initialization complete. Launching workers. 
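The traced commands above start three spdk_nvme_perf instances that share shm id 0 (-i 0), so they come up in one multi-process group, each pinned to its own core mask. A rough bash sketch of that pattern, with the flags copied from this run (the test's actual script differs in detail):

  #!/usr/bin/env bash
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # Core mask 1 << N selects lcore N: 0x1 -> lcore 0, 0x2 -> lcore 1, 0x4 -> lcore 2.
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # longest-running instance on lcore 0
  pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # lcore 1
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # lcore 2
  pid2=$!
  wait "$pid0" "$pid1" "$pid2"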
00:11:07.342 ======================================================== 00:11:07.342 Latency(us) 00:11:07.342 Device Information : IOPS MiB/s Average min max 00:11:07.342 PCIE (0000:00:10.0) NSID 1 from core 1: 6110.24 23.87 2616.36 1472.47 4845.46 00:11:07.342 PCIE (0000:00:11.0) NSID 1 from core 1: 6110.24 23.87 2617.92 1447.57 4629.35 00:11:07.342 PCIE (0000:00:13.0) NSID 1 from core 1: 6110.24 23.87 2617.90 1581.16 4880.43 00:11:07.342 PCIE (0000:00:12.0) NSID 1 from core 1: 6110.24 23.87 2618.01 1490.87 4780.36 00:11:07.342 PCIE (0000:00:12.0) NSID 2 from core 1: 6110.24 23.87 2617.96 1270.61 4841.19 00:11:07.342 PCIE (0000:00:12.0) NSID 3 from core 1: 6110.24 23.87 2618.07 1227.15 4650.84 00:11:07.342 ======================================================== 00:11:07.342 Total : 36661.45 143.21 2617.70 1227.15 4880.43 00:11:07.342 00:11:07.601 Initializing NVMe Controllers 00:11:07.601 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:07.601 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:07.601 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:07.601 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:07.601 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:07.601 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:07.601 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:07.601 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:07.601 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:07.601 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:07.601 Initialization complete. Launching workers. 00:11:07.601 ======================================================== 00:11:07.601 Latency(us) 00:11:07.601 Device Information : IOPS MiB/s Average min max 00:11:07.601 PCIE (0000:00:10.0) NSID 1 from core 2: 3590.68 14.03 4454.46 1354.37 10701.20 00:11:07.601 PCIE (0000:00:11.0) NSID 1 from core 2: 3590.68 14.03 4455.73 1253.67 11063.75 00:11:07.601 PCIE (0000:00:13.0) NSID 1 from core 2: 3590.68 14.03 4455.47 1258.07 10610.43 00:11:07.601 PCIE (0000:00:12.0) NSID 1 from core 2: 3590.68 14.03 4455.46 1091.62 10641.98 00:11:07.601 PCIE (0000:00:12.0) NSID 2 from core 2: 3590.68 14.03 4455.53 1267.10 10525.49 00:11:07.601 PCIE (0000:00:12.0) NSID 3 from core 2: 3590.68 14.03 4455.95 1254.08 10328.09 00:11:07.601 ======================================================== 00:11:07.601 Total : 21544.11 84.16 4455.43 1091.62 11063.75 00:11:07.601 00:11:07.601 18:29:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81497 00:11:09.507 Initializing NVMe Controllers 00:11:09.507 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:09.507 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:09.507 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:09.507 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:09.507 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:09.507 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:09.507 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:09.507 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:09.507 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:09.507 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:09.507 Initialization complete. Launching workers. 
00:11:09.507 ======================================================== 00:11:09.507 Latency(us) 00:11:09.507 Device Information : IOPS MiB/s Average min max 00:11:09.507 PCIE (0000:00:10.0) NSID 1 from core 0: 9703.96 37.91 1647.18 785.87 6372.32 00:11:09.507 PCIE (0000:00:11.0) NSID 1 from core 0: 9704.36 37.91 1648.23 808.74 5405.26 00:11:09.507 PCIE (0000:00:13.0) NSID 1 from core 0: 9704.36 37.91 1648.20 649.07 5976.52 00:11:09.507 PCIE (0000:00:12.0) NSID 1 from core 0: 9704.36 37.91 1648.16 551.76 6110.65 00:11:09.507 PCIE (0000:00:12.0) NSID 2 from core 0: 9704.36 37.91 1648.11 468.71 6469.19 00:11:09.507 PCIE (0000:00:12.0) NSID 3 from core 0: 9704.36 37.91 1648.09 391.75 6820.85 00:11:09.507 ======================================================== 00:11:09.507 Total : 58225.78 227.44 1647.99 391.75 6820.85 00:11:09.507 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81498 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81567 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81568 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:09.507 18:29:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:12.817 Initializing NVMe Controllers 00:11:12.817 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:12.817 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:12.817 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:12.817 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:12.817 Initialization complete. Launching workers. 
00:11:12.817 ======================================================== 00:11:12.817 Latency(us) 00:11:12.817 Device Information : IOPS MiB/s Average min max 00:11:12.817 PCIE (0000:00:10.0) NSID 1 from core 0: 5990.57 23.40 2668.58 832.06 6500.17 00:11:12.817 PCIE (0000:00:11.0) NSID 1 from core 0: 5990.57 23.40 2670.03 832.56 5322.93 00:11:12.817 PCIE (0000:00:13.0) NSID 1 from core 0: 5990.57 23.40 2670.07 857.14 5388.13 00:11:12.817 PCIE (0000:00:12.0) NSID 1 from core 0: 5990.57 23.40 2670.01 859.34 5345.17 00:11:12.817 PCIE (0000:00:12.0) NSID 2 from core 0: 5990.57 23.40 2670.02 855.59 5897.06 00:11:12.817 PCIE (0000:00:12.0) NSID 3 from core 0: 5990.57 23.40 2670.02 860.71 6372.39 00:11:12.817 ======================================================== 00:11:12.817 Total : 35943.41 140.40 2669.79 832.06 6500.17 00:11:12.817 00:11:12.817 Initializing NVMe Controllers 00:11:12.817 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:12.817 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:12.817 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:12.817 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:12.817 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:12.817 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:12.817 Initialization complete. Launching workers. 00:11:12.817 ======================================================== 00:11:12.817 Latency(us) 00:11:12.817 Device Information : IOPS MiB/s Average min max 00:11:12.817 PCIE (0000:00:10.0) NSID 1 from core 1: 6308.33 24.64 2534.16 879.20 5052.61 00:11:12.817 PCIE (0000:00:11.0) NSID 1 from core 1: 6308.33 24.64 2535.56 892.68 5009.20 00:11:12.817 PCIE (0000:00:13.0) NSID 1 from core 1: 6308.33 24.64 2535.59 893.56 5105.66 00:11:12.817 PCIE (0000:00:12.0) NSID 1 from core 1: 6308.33 24.64 2535.59 884.74 5080.78 00:11:12.817 PCIE (0000:00:12.0) NSID 2 from core 1: 6308.33 24.64 2535.57 890.05 4960.72 00:11:12.817 PCIE (0000:00:12.0) NSID 3 from core 1: 6308.33 24.64 2535.63 889.00 4826.84 00:11:12.817 ======================================================== 00:11:12.817 Total : 37849.97 147.85 2535.35 879.20 5105.66 00:11:12.817 00:11:14.725 Initializing NVMe Controllers 00:11:14.725 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:14.725 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:14.725 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:14.725 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:14.725 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:14.725 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:14.725 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:14.725 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:14.725 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:14.725 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:14.725 Initialization complete. Launching workers. 
00:11:14.725 ======================================================== 00:11:14.725 Latency(us) 00:11:14.725 Device Information : IOPS MiB/s Average min max 00:11:14.725 PCIE (0000:00:10.0) NSID 1 from core 2: 3615.39 14.12 4424.27 919.40 11178.28 00:11:14.725 PCIE (0000:00:11.0) NSID 1 from core 2: 3615.39 14.12 4425.35 917.01 10794.09 00:11:14.725 PCIE (0000:00:13.0) NSID 1 from core 2: 3615.39 14.12 4425.14 923.00 10974.10 00:11:14.725 PCIE (0000:00:12.0) NSID 1 from core 2: 3615.39 14.12 4424.81 888.57 10912.70 00:11:14.725 PCIE (0000:00:12.0) NSID 2 from core 2: 3615.39 14.12 4424.97 940.12 11525.37 00:11:14.725 PCIE (0000:00:12.0) NSID 3 from core 2: 3615.39 14.12 4425.14 783.17 11012.31 00:11:14.725 ======================================================== 00:11:14.725 Total : 21692.32 84.74 4424.95 783.17 11525.37 00:11:14.725 00:11:14.725 ************************************ 00:11:14.725 END TEST nvme_multi_secondary 00:11:14.725 ************************************ 00:11:14.725 18:29:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81567 00:11:14.725 18:29:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81568 00:11:14.725 00:11:14.725 real 0m10.483s 00:11:14.725 user 0m18.286s 00:11:14.725 sys 0m0.760s 00:11:14.725 18:29:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.725 18:29:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:14.725 18:29:14 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:14.725 18:29:14 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80521 ]] 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1086 -- # kill 80521 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1087 -- # wait 80521 00:11:14.725 [2024-07-23 18:29:14.636191] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.636543] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.636716] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.636766] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.638056] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.638167] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.638213] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.638254] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.639390] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 
00:11:14.725 [2024-07-23 18:29:14.639492] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.639532] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.639605] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.640690] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.640783] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.640821] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 [2024-07-23 18:29:14.640859] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81445) is not found. Dropping the request. 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:11:14.725 18:29:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.725 18:29:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.725 ************************************ 00:11:14.725 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:14.725 ************************************ 00:11:14.725 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:14.985 * Looking for test storage... 
00:11:14.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:14.985 18:29:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:14.985 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:14.985 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:14.985 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:14.985 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81717 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81717 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81717 ']' 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:14.986 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 [2024-07-23 18:29:15.094663] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:11:15.246 [2024-07-23 18:29:15.094851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81717 ] 00:11:15.246 [2024-07-23 18:29:15.256036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.505 [2024-07-23 18:29:15.331243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.505 [2024-07-23 18:29:15.331527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.505 [2024-07-23 18:29:15.331433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.505 [2024-07-23 18:29:15.331705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:16.075 nvme0n1 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_JO2jj.txt 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:16.075 true 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721759355 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81739 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:16.075 18:29:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:16.075 18:29:15 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 [2024-07-23 18:29:17.980196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:17.984 [2024-07-23 18:29:17.980561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:17.984 [2024-07-23 18:29:17.980606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:17.984 [2024-07-23 18:29:17.980651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.984 [2024-07-23 18:29:17.982553] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:17.984 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81739 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81739 00:11:17.984 18:29:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81739 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.984 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_JO2jj.txt 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:18.244 18:29:18 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_JO2jj.txt 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81717 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81717 ']' 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81717 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81717 00:11:18.244 killing process with pid 81717 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81717' 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81717 00:11:18.244 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81717 00:11:18.812 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:18.812 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:18.812 00:11:18.812 real 
0m3.994s 00:11:18.812 user 0m13.577s 00:11:18.813 sys 0m0.791s 00:11:18.813 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.813 ************************************ 00:11:18.813 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:18.813 ************************************ 00:11:18.813 18:29:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:18.813 18:29:18 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:18.813 18:29:18 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:18.813 18:29:18 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:18.813 18:29:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.813 18:29:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:18.813 ************************************ 00:11:18.813 START TEST nvme_fio 00:11:18.813 ************************************ 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:11:18.813 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:18.813 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:18.813 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.813 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:19.072 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:19.072 18:29:18 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:19.072 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:19.072 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:19.072 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:19.072 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:19.072 18:29:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:19.330 18:29:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:19.330 18:29:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:19.589 18:29:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:19.589 18:29:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:19.589 18:29:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:19.589 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:19.589 fio-3.35 00:11:19.589 Starting 1 thread 00:11:26.156 00:11:26.156 test: (groupid=0, jobs=1): err= 0: pid=81873: Tue Jul 23 18:29:25 2024 00:11:26.156 read: IOPS=22.4k, BW=87.3MiB/s (91.6MB/s)(175MiB/2001msec) 00:11:26.156 slat (usec): min=4, max=113, avg= 5.93, stdev= 1.28 00:11:26.157 clat (usec): min=197, max=14983, avg=2857.69, stdev=430.28 00:11:26.157 lat (usec): min=203, max=15097, avg=2863.62, stdev=430.81 00:11:26.157 clat percentiles (usec): 00:11:26.157 | 1.00th=[ 2343], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:26.157 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:26.157 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 2999], 00:11:26.157 | 99.00th=[ 3851], 99.50th=[ 5538], 99.90th=[ 8979], 99.95th=[11338], 00:11:26.157 | 99.99th=[14615] 00:11:26.157 bw ( KiB/s): min=86040, max=88840, per=98.25%, avg=87866.67, stdev=1583.08, samples=3 00:11:26.157 iops : min=21510, max=22208, avg=21966.00, stdev=395.16, samples=3 00:11:26.157 write: IOPS=22.2k, BW=86.7MiB/s (91.0MB/s)(174MiB/2001msec); 0 zone resets 00:11:26.157 slat (usec): min=4, max=184, avg= 6.06, stdev= 1.50 00:11:26.157 clat (usec): min=207, max=14747, avg=2862.84, stdev=438.79 00:11:26.157 lat (usec): min=212, max=14772, avg=2868.90, stdev=439.32 00:11:26.157 clat percentiles (usec): 00:11:26.157 | 1.00th=[ 2376], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:26.157 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:26.157 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3032], 00:11:26.157 | 99.00th=[ 3818], 99.50th=[ 5669], 99.90th=[ 8979], 99.95th=[11863], 00:11:26.157 | 99.99th=[14222] 00:11:26.157 bw ( KiB/s): min=85712, max=89648, per=99.14%, avg=88061.33, stdev=2075.88, samples=3 00:11:26.157 iops : min=21428, max=22412, avg=22015.33, stdev=518.97, samples=3 00:11:26.157 lat (usec) : 250=0.01%, 500=0.01%, 
750=0.01%, 1000=0.02% 00:11:26.157 lat (msec) : 2=0.57%, 4=98.52%, 10=0.79%, 20=0.07% 00:11:26.157 cpu : usr=99.35%, sys=0.00%, ctx=10, majf=0, minf=626 00:11:26.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:26.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:26.157 issued rwts: total=44738,44434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:26.157 00:11:26.157 Run status group 0 (all jobs): 00:11:26.157 READ: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec 00:11:26.157 WRITE: bw=86.7MiB/s (91.0MB/s), 86.7MiB/s-86.7MiB/s (91.0MB/s-91.0MB/s), io=174MiB (182MB), run=2001-2001msec 00:11:26.157 ----------------------------------------------------- 00:11:26.157 Suppressions used: 00:11:26.157 count bytes template 00:11:26.157 1 32 /usr/src/fio/parse.c 00:11:26.157 1 8 libtcmalloc_minimal.so 00:11:26.157 ----------------------------------------------------- 00:11:26.157 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:26.157 18:29:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:26.157 18:29:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:26.157 18:29:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:26.157 18:29:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:26.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:26.417 fio-3.35 00:11:26.417 Starting 1 thread 00:11:32.985 00:11:32.985 test: (groupid=0, jobs=1): err= 0: pid=81968: Tue Jul 23 18:29:32 2024 00:11:32.985 read: IOPS=22.3k, BW=87.1MiB/s (91.4MB/s)(174MiB/2001msec) 00:11:32.985 slat (usec): min=4, max=178, avg= 6.05, stdev= 1.60 00:11:32.985 clat (usec): min=221, max=14953, avg=2865.54, stdev=495.16 00:11:32.985 lat (usec): min=227, max=15030, avg=2871.60, stdev=496.01 00:11:32.985 clat percentiles (usec): 00:11:32.985 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:32.985 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:11:32.985 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:11:32.985 | 99.00th=[ 4621], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[10552], 00:11:32.985 | 99.99th=[14484] 00:11:32.985 bw ( KiB/s): min=85640, max=89288, per=98.64%, avg=88010.67, stdev=2055.12, samples=3 00:11:32.985 iops : min=21410, max=22322, avg=22002.67, stdev=513.78, samples=3 00:11:32.985 write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets 00:11:32.985 slat (nsec): min=4959, max=59721, avg=6185.97, stdev=1360.44 00:11:32.985 clat (usec): min=199, max=14656, avg=2867.78, stdev=496.49 00:11:32.985 lat (usec): min=205, max=14674, avg=2873.97, stdev=497.32 00:11:32.985 clat percentiles (usec): 00:11:32.985 | 1.00th=[ 2507], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:11:32.985 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:11:32.985 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:11:32.985 | 99.00th=[ 4686], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[11076], 00:11:32.985 | 99.99th=[13960] 00:11:32.985 bw ( KiB/s): min=85744, max=90248, per=99.51%, avg=88213.33, stdev=2283.24, samples=3 00:11:32.985 iops : min=21436, max=22562, avg=22053.33, stdev=570.81, samples=3 00:11:32.985 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:32.985 lat (msec) : 2=0.36%, 4=97.99%, 10=1.55%, 20=0.06% 00:11:32.985 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=626 00:11:32.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:32.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.985 issued rwts: total=44635,44344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.985 00:11:32.985 Run status group 0 (all jobs): 00:11:32.985 READ: bw=87.1MiB/s (91.4MB/s), 87.1MiB/s-87.1MiB/s (91.4MB/s-91.4MB/s), io=174MiB (183MB), run=2001-2001msec 00:11:32.985 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:11:32.985 ----------------------------------------------------- 00:11:32.985 Suppressions used: 00:11:32.985 count bytes template 00:11:32.985 1 32 
/usr/src/fio/parse.c 00:11:32.985 1 8 libtcmalloc_minimal.so 00:11:32.985 ----------------------------------------------------- 00:11:32.985 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:32.985 18:29:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:32.985 18:29:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:33.244 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:33.244 fio-3.35 00:11:33.244 Starting 1 thread 00:11:41.369 00:11:41.369 test: (groupid=0, jobs=1): err= 0: pid=82056: Tue Jul 23 18:29:40 2024 00:11:41.369 read: IOPS=25.6k, BW=100MiB/s (105MB/s)(200MiB/2001msec) 00:11:41.369 slat (nsec): min=3696, max=72679, avg=4314.59, stdev=1110.36 00:11:41.369 clat (usec): min=176, max=13704, avg=2485.85, stdev=360.87 00:11:41.369 lat (usec): min=180, max=13776, avg=2490.17, stdev=361.45 
00:11:41.369 clat percentiles (usec): 00:11:41.369 | 1.00th=[ 2180], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2376], 00:11:41.369 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2474], 00:11:41.369 | 70.00th=[ 2507], 80.00th=[ 2540], 90.00th=[ 2606], 95.00th=[ 2638], 00:11:41.369 | 99.00th=[ 3195], 99.50th=[ 4293], 99.90th=[ 7570], 99.95th=[ 9765], 00:11:41.369 | 99.99th=[13042] 00:11:41.369 bw ( KiB/s): min=98099, max=104232, per=99.50%, avg=101995.67, stdev=3386.88, samples=3 00:11:41.369 iops : min=24524, max=26058, avg=25498.67, stdev=847.15, samples=3 00:11:41.369 write: IOPS=25.5k, BW=99.7MiB/s (105MB/s)(200MiB/2001msec); 0 zone resets 00:11:41.369 slat (nsec): min=3996, max=51564, avg=4905.02, stdev=1198.40 00:11:41.369 clat (usec): min=167, max=13467, avg=2499.57, stdev=369.82 00:11:41.369 lat (usec): min=172, max=13484, avg=2504.47, stdev=370.41 00:11:41.369 clat percentiles (usec): 00:11:41.369 | 1.00th=[ 2180], 5.00th=[ 2311], 10.00th=[ 2343], 20.00th=[ 2409], 00:11:41.369 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:11:41.369 | 70.00th=[ 2540], 80.00th=[ 2540], 90.00th=[ 2606], 95.00th=[ 2671], 00:11:41.369 | 99.00th=[ 3228], 99.50th=[ 4359], 99.90th=[ 7767], 99.95th=[10159], 00:11:41.369 | 99.99th=[12780] 00:11:41.369 bw ( KiB/s): min=97780, max=104224, per=99.78%, avg=101908.00, stdev=3583.82, samples=3 00:11:41.369 iops : min=24445, max=26056, avg=25477.00, stdev=895.96, samples=3 00:11:41.369 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:11:41.369 lat (msec) : 2=0.62%, 4=98.75%, 10=0.53%, 20=0.05% 00:11:41.369 cpu : usr=99.45%, sys=0.00%, ctx=3, majf=0, minf=628 00:11:41.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:41.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.369 issued rwts: total=51280,51093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.369 00:11:41.369 Run status group 0 (all jobs): 00:11:41.369 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=200MiB (210MB), run=2001-2001msec 00:11:41.369 WRITE: bw=99.7MiB/s (105MB/s), 99.7MiB/s-99.7MiB/s (105MB/s-105MB/s), io=200MiB (209MB), run=2001-2001msec 00:11:41.369 ----------------------------------------------------- 00:11:41.369 Suppressions used: 00:11:41.369 count bytes template 00:11:41.369 1 32 /usr/src/fio/parse.c 00:11:41.369 1 8 libtcmalloc_minimal.so 00:11:41.369 ----------------------------------------------------- 00:11:41.369 00:11:41.369 18:29:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:41.369 18:29:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:41.369 18:29:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.369 18:29:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:41.369 18:29:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.369 18:29:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:41.369 18:29:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:41.369 18:29:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' 
--bs=4096 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:41.369 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:41.370 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:41.370 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:41.370 18:29:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:41.629 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:41.629 fio-3.35 00:11:41.629 Starting 1 thread 00:11:49.758 00:11:49.758 test: (groupid=0, jobs=1): err= 0: pid=82172: Tue Jul 23 18:29:49 2024 00:11:49.758 read: IOPS=26.5k, BW=104MiB/s (109MB/s)(207MiB/2001msec) 00:11:49.758 slat (nsec): min=3693, max=25648, avg=4102.42, stdev=751.51 00:11:49.758 clat (usec): min=189, max=4963, avg=2403.27, stdev=151.98 00:11:49.758 lat (usec): min=193, max=4988, avg=2407.37, stdev=152.03 00:11:49.758 clat percentiles (usec): 00:11:49.758 | 1.00th=[ 2114], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2311], 00:11:49.758 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2409], 00:11:49.758 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2507], 95.00th=[ 2573], 00:11:49.758 | 99.00th=[ 2704], 99.50th=[ 2900], 99.90th=[ 4293], 99.95th=[ 4555], 00:11:49.758 | 99.99th=[ 4883] 00:11:49.758 bw ( KiB/s): min=104896, max=106760, per=99.70%, avg=105842.67, stdev=932.35, samples=3 00:11:49.758 iops : min=26224, max=26690, avg=26460.67, stdev=233.09, samples=3 00:11:49.758 write: IOPS=26.4k, BW=103MiB/s (108MB/s)(207MiB/2001msec); 0 zone resets 00:11:49.758 slat (nsec): min=3797, max=25956, avg=4695.59, stdev=813.84 00:11:49.758 clat (usec): min=182, max=4923, avg=2413.90, stdev=157.11 00:11:49.758 lat (usec): min=187, max=4929, avg=2418.60, stdev=157.15 00:11:49.758 clat percentiles (usec): 00:11:49.758 | 1.00th=[ 2114], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2343], 00:11:49.758 | 30.00th=[ 2376], 
40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:11:49.758 | 70.00th=[ 2474], 80.00th=[ 2474], 90.00th=[ 2540], 95.00th=[ 2573], 00:11:49.758 | 99.00th=[ 2704], 99.50th=[ 2966], 99.90th=[ 4359], 99.95th=[ 4555], 00:11:49.758 | 99.99th=[ 4817] 00:11:49.758 bw ( KiB/s): min=105056, max=105848, per=99.76%, avg=105541.33, stdev=425.16, samples=3 00:11:49.758 iops : min=26264, max=26462, avg=26385.33, stdev=106.29, samples=3 00:11:49.758 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:11:49.759 lat (msec) : 2=0.49%, 4=99.29%, 10=0.17% 00:11:49.759 cpu : usr=99.50%, sys=0.00%, ctx=4, majf=0, minf=623 00:11:49.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:49.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.759 issued rwts: total=53106,52922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.759 00:11:49.759 Run status group 0 (all jobs): 00:11:49.759 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=207MiB (218MB), run=2001-2001msec 00:11:49.759 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=207MiB (217MB), run=2001-2001msec 00:11:49.759 ----------------------------------------------------- 00:11:49.759 Suppressions used: 00:11:49.759 count bytes template 00:11:49.759 1 32 /usr/src/fio/parse.c 00:11:49.759 1 8 libtcmalloc_minimal.so 00:11:49.759 ----------------------------------------------------- 00:11:49.759 00:11:49.759 18:29:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:49.759 18:29:49 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:49.759 00:11:49.759 real 0m30.476s 00:11:49.759 user 0m16.795s 00:11:49.759 sys 0m25.941s 00:11:49.759 ************************************ 00:11:49.759 END TEST nvme_fio 00:11:49.759 ************************************ 00:11:49.759 18:29:49 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.759 18:29:49 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 ************************************ 00:11:49.759 END TEST nvme 00:11:49.759 ************************************ 00:11:49.759 00:11:49.759 real 1m40.735s 00:11:49.759 user 3m38.568s 00:11:49.759 sys 0m36.194s 00:11:49.759 18:29:49 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.759 18:29:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 18:29:49 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:49.759 18:29:49 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:49.759 18:29:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:49.759 18:29:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:49.759 18:29:49 -- common/autotest_common.sh@10 -- # set +x 00:11:49.759 ************************************ 00:11:49.759 START TEST nvme_scc 00:11:49.759 ************************************ 00:11:49.759 18:29:49 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:49.759 * Looking for test storage... 
00:11:49.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:49.759 18:29:49 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.759 18:29:49 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.759 18:29:49 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.759 18:29:49 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.759 18:29:49 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.759 18:29:49 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.759 18:29:49 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.759 18:29:49 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:49.759 18:29:49 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:49.759 18:29:49 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:49.759 18:29:49 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.759 18:29:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:49.759 18:29:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:49.759 18:29:49 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:49.759 18:29:49 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:50.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:50.279 Waiting for block devices as requested 00:11:50.540 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.540 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.540 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:50.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.088 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:56.088 18:29:55 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:56.088 18:29:55 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:56.088 18:29:55 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:56.088 18:29:55 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.088 18:29:55 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.088 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:56.089 
18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:56.089 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:56.090 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:56.090 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:56.090 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.091 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.092 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.093 18:29:55 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:56.093 18:29:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:56.093 18:29:55 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:56.093 18:29:55 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:56.094 18:29:55 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.094 18:29:55 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:56.094 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:56.094 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 
18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:56.095 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:56.095 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:56.096 18:29:55 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:56.096 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.096 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.097 
18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:56.097 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
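The records above and below show the nvme_get helper walking the output of nvme id-ns /dev/nvme1n1 line by line: with IFS set to ':' it reads each line into a field name and a value, skips entries with no value, and evals the pair into the global associative array nvme1n1. A minimal sketch of that pattern, assuming a hypothetical array name and leaving out the extra key/value normalization the real helper in nvme/functions.sh performs:

    declare -gA ns_info=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                    # skip lines that carry no value
        reg=${reg//[[:space:]]/}                     # field name, e.g. nsze, flbas, lbaf0
        val=${val#"${val%%[![:space:]]*}"}           # trim leading whitespace from the value
        ns_info[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"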
00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:56.098 18:29:55 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:56.098 18:29:55 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:56.098 18:29:55 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.098 18:29:55 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.098 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.099 18:29:55 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.099 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
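At nvme/functions.sh@47 through @63 in the trace above, the script loops over /sys/class/nvme/nvme*, checks each controller's PCI address with pci_can_use, captures its id-ctrl and per-namespace id-ns data, and records the controller, its namespaces and its PCI BDF in the ctrls, nvmes, bdfs and ordered_ctrls tables. A rough sketch of that enumeration, without the pci_can_use filtering from scripts/common.sh and resolving the PCI address via readlink purely for illustration:

    declare -A ctrls=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # PCI address, e.g. 0000:00:12.0
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$bdf
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        for ns in "$ctrl/${ctrl##*/}n"*; do               # namespaces, e.g. nvme2n1
            [[ -e $ns ]] && echo "namespace ${ns##*/} on $ctrl_dev ($bdf)"
        done
    done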
00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
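The test being run is nvme_scc, which exercises the simple copy command, and the oncs value captured above (0x15d) is what decides whether a controller can take part: per the NVMe base specification, bit 8 of ONCS advertises Copy command support. A quick illustration of reading that bit, not the suite's own helper:

    oncs=0x15d
    if (( oncs & 0x100 )); then      # ONCS bit 8: Copy (simple copy) command supported
        echo "controller advertises the simple copy command"
    else
        echo "simple copy not supported; the controller would be skipped"
    fi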
00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.100 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:56.101 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 
18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.101 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:56.102 18:29:55 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:56.102 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:56.103 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
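The functions.sh@21-23 pattern repeated throughout this trace is the nvme_get helper splitting each "nvme id-ns" output line on its first ':' and eval-ing the reg/val pair into a global associative array named after the device. Below is a minimal standalone sketch reconstructed from those trace lines, not the SPDK helper itself; the function name nvme_get_sketch, the whitespace handling, and the NVME_CLI variable are illustrative assumptions, while the nvme-cli binary path and the /dev/nvme2n2 invocation are taken from this log.

    #!/usr/bin/env bash
    # Hedged reconstruction of the id-ns parsing loop seen at functions.sh@16-23.
    set -u

    nvme_get_sketch() {
        local ref=$1 subcmd=$2 dev=$3 reg val

        unset -v "$ref"
        declare -gA "$ref"                        # e.g. a global array nvme2n2=()

        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "nsze    " -> nsze, "lbaf  4 " -> lbaf4
            val=${val# }                          # drop the space after the first ':'
            [[ -n $reg && -n $val ]] || continue  # skip the header line, as in the @22 checks
            eval "${ref}[${reg}]=\"\$val\""       # nvme2n2[nsze]="0x100000", ...
        done < <("$NVME_CLI" "$subcmd" "$dev")
    }

    NVME_CLI=/usr/local/src/nvme-cli/nvme         # binary path reported in the trace
    nvme_get_sketch nvme2n2 id-ns /dev/nvme2n2    # same invocation as functions.sh@16
    declare -p nvme2n2                            # dump the captured fields

Run against a live namespace, this produces the same keys the eval lines above record (nsze, ncap, flbas, the lbaf0-7 descriptors, and so on); fields whose value is empty are skipped, which is why the trace shows the occasional "[[ -n '' ]]" test with no assignment.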
00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:56.104 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:56.105 18:29:55 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:56.105 18:29:55 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.105 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:56.106 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
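The nvme2n3 dump above closes the per-namespace walk for controller nvme2; the functions.sh@58 and @60-63 lines that follow register the namespace map and the controller itself in the global ctrls/nvmes/bdfs/ordered_ctrls tables before the @47 loop moves on to nvme3. The following is a hedged sketch of that outer structure, reconstructed from the trace: the nvme_get stub stands in for the field parser sketched earlier, and the sysfs path and the 0000:00:12.0 bdf are simply the values this log reports for nvme2.

    #!/usr/bin/env bash
    # Hedged reconstruction of the namespace walk and controller registration
    # seen at functions.sh@47-63 in the trace above.
    set -u

    nvme_get() { :; }                      # placeholder for the id-ns/id-ctrl parser

    declare -A ctrls=() nvmes=() bdfs=()   # global maps filled per controller (@60-62)
    declare -a ordered_ctrls=()            # controllers indexed by instance number (@63)
    declare -A nvme2_ns=()                 # this controller's namespace map

    ctrl=/sys/class/nvme/nvme2             # controller chosen by the @47 for-loop
    ctrl_dev=${ctrl##*/}                   # nvme2
    pci=0000:00:12.0                       # bdf reported for nvme2 in this log
    declare -n _ctrl_ns=nvme2_ns           # nameref, as in functions.sh@53

    for ns in "$ctrl/${ctrl##*/}n"*; do    # nvme2n1, nvme2n2, nvme2n3, ...
        [[ -e $ns ]] || continue           # glob may be unmatched on another host
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev        # @58: register by namespace number
    done

    ctrls["$ctrl_dev"]=$ctrl_dev                  # @60
    nvmes["$ctrl_dev"]=nvme2_ns                   # @61: name of the namespace map
    bdfs["$ctrl_dev"]=$pci                        # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63: slot 2
    declare -p nvme2_ns ctrls nvmes bdfs ordered_ctrls

Keying ordered_ctrls by the numeric suffix keeps controllers in kernel instance order regardless of the order the sysfs glob returns them, which appears to be why the trace registers nvme2 at index 2 before starting on nvme3.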
00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:56.107 18:29:56 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:56.107 18:29:56 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:56.107 18:29:56 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:56.107 18:29:56 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:56.107 18:29:56 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:56.107 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:56.108 18:29:56 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:56.108 18:29:56 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.108 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:56.109 18:29:56 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:56.109 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:56.110 
18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
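The xtrace above is nvme_get populating the nvme3 associative array: every "field : value" line emitted by nvme id-ctrl /dev/nvme3 is split on ':' (IFS=:) and evaled into nvme3[field]=value. A minimal sketch of that parsing pattern, assuming only stock bash and a readable /dev/nvme3; the variable and function names below are illustrative, not the ones used by functions.sh:

    # Read id-ctrl output line by line and keep each register in an associative array.
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        # keep only "field : value" lines; skip blanks and decoration
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}      # strip the padding id-ctrl prints around the key
        ctrl_regs[$reg]=${val# }      # drop the single leading space before the value
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl_regs[oncs]:-unset}"

With the QEMU controller traced above, this would report oncs=0x15d, matching the value stored into nvme3[oncs].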
00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:56.110 18:29:56 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:56.110 18:29:56 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:56.110 18:29:56 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:56.111 18:29:56 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:56.111 18:29:56 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:56.111 18:29:56 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:56.111 18:29:56 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:57.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.620 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.620 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.620 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.620 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:57.620 18:29:57 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:57.620 18:29:57 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:57.620 18:29:57 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.620 18:29:57 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:57.620 ************************************ 00:11:57.620 START TEST nvme_simple_copy 00:11:57.620 ************************************ 00:11:57.620 18:29:57 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:57.879 Initializing NVMe Controllers 00:11:57.879 Attaching to 0000:00:10.0 00:11:57.879 Controller supports SCC. Attached to 0000:00:10.0 00:11:57.879 Namespace ID: 1 size: 6GB 00:11:57.879 Initialization complete. 00:11:57.879 00:11:57.879 Controller QEMU NVMe Ctrl (12340 ) 00:11:57.879 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:57.879 Namespace Block Size:4096 00:11:57.879 Writing LBAs 0 to 63 with Random Data 00:11:57.879 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:57.879 LBAs matching Written Data: 64 00:11:57.879 00:11:57.879 real 0m0.252s 00:11:57.879 user 0m0.085s 00:11:57.879 sys 0m0.068s 00:11:57.879 18:29:57 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.879 18:29:57 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:57.879 ************************************ 00:11:57.879 END TEST nvme_simple_copy 00:11:57.879 ************************************ 00:11:57.879 ************************************ 00:11:57.879 END TEST nvme_scc 00:11:57.879 ************************************ 00:11:57.879 00:11:57.879 real 0m8.501s 00:11:57.879 user 0m1.312s 00:11:57.879 sys 0m2.203s 00:11:57.879 18:29:57 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.879 18:29:57 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:58.138 18:29:57 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:58.138 18:29:57 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:58.138 18:29:57 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:58.138 18:29:57 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:58.138 18:29:57 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:58.138 18:29:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:58.138 18:29:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.138 18:29:57 -- common/autotest_common.sh@10 -- # set +x 00:11:58.138 ************************************ 00:11:58.138 START TEST nvme_fdp 00:11:58.138 ************************************ 00:11:58.138 18:29:57 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:11:58.138 * Looking for test storage... 
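The get_ctrls_with_feature scc pass traced above keeps a controller when bit 8 of its ONCS value (0x15d for every controller here) is set; that is how Simple Copy Command support is detected before simple_copy is run against nvme1. A minimal sketch of that bit test, using an illustrative function name rather than the ctrl_has_scc helper itself:

    ctrl_supports_scc() {
        local oncs=$1             # ONCS as reported by id-ctrl, e.g. 0x15d above
        (( oncs & 1 << 8 ))       # bit 8 = Simple Copy Command supported
    }
    ctrl_supports_scc 0x15d && echo "SCC supported"   # 0x15d includes 0x100, so this prints

Any controller whose ONCS lacks bit 8 would simply be skipped by the loop, and the first surviving controller (nvme1 on 0000:00:10.0 here) is the one handed to the simple-copy test.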
00:11:58.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:58.138 18:29:58 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:58.138 18:29:58 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:58.138 18:29:58 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:58.138 18:29:58 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:58.138 18:29:58 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.138 18:29:58 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.138 18:29:58 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.138 18:29:58 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.138 18:29:58 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.138 18:29:58 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.139 18:29:58 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.139 18:29:58 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:58.139 18:29:58 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:58.139 18:29:58 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:58.139 18:29:58 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:58.139 18:29:58 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.967 Waiting for block devices as requested 00:11:58.967 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.227 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.227 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:04.515 18:30:04 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:04.515 18:30:04 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:04.515 18:30:04 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:04.515 18:30:04 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.515 18:30:04 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 
18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.515 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:04.516 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:04.516 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.516 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:04.517 18:30:04 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:04.517 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.518 
18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.518 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:04.519 
18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:04.519 18:30:04 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.519 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:04.520 18:30:04 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:04.520 18:30:04 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:04.520 18:30:04 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.520 18:30:04 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:04.520 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 
18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.520 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:04.521 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.521 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:04.522 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.522 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.523 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 
18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:04.524 18:30:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:04.525 18:30:04 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:04.525 18:30:04 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:04.525 18:30:04 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.525 18:30:04 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:04.525 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.525 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.526 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:04.527 18:30:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:04.527 18:30:04 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.527 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 
18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:04.528 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.528 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:04.529 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.529 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:04.530 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:04.531 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:04.532 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:04.532 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:04.533 18:30:04 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:04.533 18:30:04 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:04.533 18:30:04 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:04.533 18:30:04 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:04.533 18:30:04 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.533 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:04.534 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:04.535 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 
18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:04.536 18:30:04 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:04.796 18:30:04 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:04.796 18:30:04 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:04.796 18:30:04 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:04.797 18:30:04 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:04.797 18:30:04 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:04.797 18:30:04 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:04.797 18:30:04 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:04.797 18:30:04 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:04.797 18:30:04 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:04.797 18:30:04 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:05.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.936 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.936 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.936 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.936 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:06.195 18:30:06 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:06.195 18:30:06 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:06.195 18:30:06 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:06.195 18:30:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:06.195 ************************************ 00:12:06.195 START TEST nvme_flexible_data_placement 00:12:06.196 ************************************ 00:12:06.196 18:30:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:06.455 Initializing NVMe Controllers 00:12:06.455 Attaching to 0000:00:13.0 00:12:06.455 Controller supports FDP Attached to 0000:00:13.0 00:12:06.455 Namespace ID: 1 Endurance Group ID: 1 
00:12:06.455 Initialization complete. 00:12:06.455 00:12:06.455 ================================== 00:12:06.455 == FDP tests for Namespace: #01 == 00:12:06.455 ================================== 00:12:06.455 00:12:06.455 Get Feature: FDP: 00:12:06.455 ================= 00:12:06.455 Enabled: Yes 00:12:06.455 FDP configuration Index: 0 00:12:06.455 00:12:06.455 FDP configurations log page 00:12:06.455 =========================== 00:12:06.455 Number of FDP configurations: 1 00:12:06.455 Version: 0 00:12:06.455 Size: 112 00:12:06.455 FDP Configuration Descriptor: 0 00:12:06.455 Descriptor Size: 96 00:12:06.455 Reclaim Group Identifier format: 2 00:12:06.455 FDP Volatile Write Cache: Not Present 00:12:06.455 FDP Configuration: Valid 00:12:06.455 Vendor Specific Size: 0 00:12:06.455 Number of Reclaim Groups: 2 00:12:06.455 Number of Recalim Unit Handles: 8 00:12:06.455 Max Placement Identifiers: 128 00:12:06.455 Number of Namespaces Suppprted: 256 00:12:06.455 Reclaim unit Nominal Size: 6000000 bytes 00:12:06.455 Estimated Reclaim Unit Time Limit: Not Reported 00:12:06.455 RUH Desc #000: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #001: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #002: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #003: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #004: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #005: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #006: RUH Type: Initially Isolated 00:12:06.455 RUH Desc #007: RUH Type: Initially Isolated 00:12:06.455 00:12:06.455 FDP reclaim unit handle usage log page 00:12:06.455 ====================================== 00:12:06.455 Number of Reclaim Unit Handles: 8 00:12:06.455 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:06.455 RUH Usage Desc #001: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #002: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #003: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #004: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #005: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #006: RUH Attributes: Unused 00:12:06.455 RUH Usage Desc #007: RUH Attributes: Unused 00:12:06.455 00:12:06.455 FDP statistics log page 00:12:06.455 ======================= 00:12:06.455 Host bytes with metadata written: 1603620864 00:12:06.455 Media bytes with metadata written: 1605173248 00:12:06.455 Media bytes erased: 0 00:12:06.455 00:12:06.455 FDP Reclaim unit handle status 00:12:06.455 ============================== 00:12:06.455 Number of RUHS descriptors: 2 00:12:06.455 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000006ab 00:12:06.455 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:06.455 00:12:06.455 FDP write on placement id: 0 success 00:12:06.455 00:12:06.456 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:06.456 00:12:06.456 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:06.456 00:12:06.456 Get Feature: FDP Events for Placement handle: #0 00:12:06.456 ======================== 00:12:06.456 Number of FDP Events: 6 00:12:06.456 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:06.456 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:06.456 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:06.456 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:06.456 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:06.456 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
00:12:06.456 00:12:06.456 FDP events log page 00:12:06.456 =================== 00:12:06.456 Number of FDP events: 1 00:12:06.456 FDP Event #0: 00:12:06.456 Event Type: RU Not Written to Capacity 00:12:06.456 Placement Identifier: Valid 00:12:06.456 NSID: Valid 00:12:06.456 Location: Valid 00:12:06.456 Placement Identifier: 0 00:12:06.456 Event Timestamp: 3 00:12:06.456 Namespace Identifier: 1 00:12:06.456 Reclaim Group Identifier: 0 00:12:06.456 Reclaim Unit Handle Identifier: 0 00:12:06.456 00:12:06.456 FDP test passed 00:12:06.456 00:12:06.456 real 0m0.234s 00:12:06.456 user 0m0.067s 00:12:06.456 sys 0m0.066s 00:12:06.456 ************************************ 00:12:06.456 END TEST nvme_flexible_data_placement 00:12:06.456 ************************************ 00:12:06.456 18:30:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:06.456 18:30:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:06.456 ************************************ 00:12:06.456 END TEST nvme_fdp 00:12:06.456 ************************************ 00:12:06.456 00:12:06.456 real 0m8.405s 00:12:06.456 user 0m1.244s 00:12:06.456 sys 0m2.211s 00:12:06.456 18:30:06 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:06.456 18:30:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:06.456 18:30:06 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:12:06.456 18:30:06 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:06.456 18:30:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:06.456 18:30:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:06.456 18:30:06 -- common/autotest_common.sh@10 -- # set +x 00:12:06.456 ************************************ 00:12:06.456 START TEST nvme_rpc 00:12:06.456 ************************************ 00:12:06.456 18:30:06 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:06.716 * Looking for test storage... 
00:12:06.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83565 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:06.716 18:30:06 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83565 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 83565 ']' 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:06.716 18:30:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.716 [2024-07-23 18:30:06.744699] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:06.716 [2024-07-23 18:30:06.744919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83565 ] 00:12:06.976 [2024-07-23 18:30:06.891475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.976 [2024-07-23 18:30:06.963251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.976 [2024-07-23 18:30:06.963350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.545 18:30:07 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:07.545 18:30:07 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:07.545 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:07.804 Nvme0n1 00:12:07.804 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:07.804 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:08.063 request: 00:12:08.064 { 00:12:08.064 "filename": "non_existing_file", 00:12:08.064 "bdev_name": "Nvme0n1", 00:12:08.064 "method": "bdev_nvme_apply_firmware", 00:12:08.064 "req_id": 1 00:12:08.064 } 00:12:08.064 Got JSON-RPC error response 00:12:08.064 response: 00:12:08.064 { 00:12:08.064 "code": -32603, 00:12:08.064 "message": "open file failed." 00:12:08.064 } 00:12:08.064 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:08.064 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:08.064 18:30:07 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:08.064 18:30:08 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:08.064 18:30:08 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83565 00:12:08.064 18:30:08 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 83565 ']' 00:12:08.064 18:30:08 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 83565 00:12:08.064 18:30:08 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:12:08.064 18:30:08 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:08.064 18:30:08 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83565 00:12:08.323 killing process with pid 83565 00:12:08.323 18:30:08 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:08.323 18:30:08 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:08.323 18:30:08 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83565' 00:12:08.323 18:30:08 nvme_rpc -- common/autotest_common.sh@965 -- # kill 83565 00:12:08.323 18:30:08 nvme_rpc -- common/autotest_common.sh@970 -- # wait 83565 00:12:08.894 ************************************ 00:12:08.894 END TEST nvme_rpc 00:12:08.894 ************************************ 00:12:08.894 00:12:08.894 real 0m2.313s 00:12:08.894 user 0m3.987s 00:12:08.894 sys 0m0.747s 00:12:08.894 18:30:08 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:08.894 18:30:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.894 18:30:08 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:08.894 18:30:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:08.894 
18:30:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:08.894 18:30:08 -- common/autotest_common.sh@10 -- # set +x 00:12:08.894 ************************************ 00:12:08.894 START TEST nvme_rpc_timeouts 00:12:08.894 ************************************ 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:08.894 * Looking for test storage... 00:12:08.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83620 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83620 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83644 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:08.894 18:30:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83644 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 83644 ']' 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:08.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:08.894 18:30:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:09.153 [2024-07-23 18:30:09.029926] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:12:09.153 [2024-07-23 18:30:09.030064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83644 ] 00:12:09.153 [2024-07-23 18:30:09.177031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.413 [2024-07-23 18:30:09.252472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.413 [2024-07-23 18:30:09.252628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.004 18:30:09 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:10.004 18:30:09 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:12:10.004 18:30:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:10.004 Checking default timeout settings: 00:12:10.004 18:30:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:10.280 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:10.280 Making settings changes with rpc: 00:12:10.280 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:10.280 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:10.280 Check default vs. modified settings: 00:12:10.280 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83620 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83620 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:10.539 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:10.798 Setting action_on_timeout is changed as expected. 
00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83620 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83620 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:10.798 Setting timeout_us is changed as expected. 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83620 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83620 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:10.798 Setting timeout_admin_us is changed as expected. 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
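Each of the three "changed as expected" messages comes from the same extract-and-compare step; a minimal sketch of it, assuming the grep/awk/sed pipeline shown in the trace and a fail-on-equal policy that the script only implies:

  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_83620  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_83620  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" == "$after" ] && exit 1      # assumed failure path: the RPC change did not stick
      echo "Setting $setting is changed as expected."
  done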
00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83620 /tmp/settings_modified_83620 00:12:10.798 18:30:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83644 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 83644 ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 83644 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83644 00:12:10.798 killing process with pid 83644 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83644' 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 83644 00:12:10.798 18:30:10 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 83644 00:12:11.367 RPC TIMEOUT SETTING TEST PASSED. 00:12:11.367 18:30:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:11.367 ************************************ 00:12:11.367 END TEST nvme_rpc_timeouts 00:12:11.367 ************************************ 00:12:11.367 00:12:11.367 real 0m2.503s 00:12:11.367 user 0m4.557s 00:12:11.367 sys 0m0.780s 00:12:11.367 18:30:11 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.367 18:30:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:11.367 18:30:11 -- spdk/autotest.sh@243 -- # uname -s 00:12:11.367 18:30:11 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:12:11.367 18:30:11 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:11.367 18:30:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:11.367 18:30:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.367 18:30:11 -- common/autotest_common.sh@10 -- # set +x 00:12:11.367 ************************************ 00:12:11.367 START TEST sw_hotplug 00:12:11.367 ************************************ 00:12:11.367 18:30:11 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:11.627 * Looking for test storage... 
00:12:11.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.627 18:30:11 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:12.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.196 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.196 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.196 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.196 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.456 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:12:12.456 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:12:12.456 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:12:12.456 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:12.456 18:30:12 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:12.456 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:12.457 18:30:12 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:12:12.457 18:30:12 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:12:12.457 18:30:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=83991 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:12:12.457 18:30:12 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:12.457 18:30:12 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:12.457 18:30:12 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:12.457 18:30:12 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:12.457 18:30:12 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:12.716 Initializing NVMe Controllers 00:12:12.716 Attaching to 0000:00:10.0 00:12:12.716 Attaching to 0000:00:11.0 00:12:12.716 Attaching to 0000:00:12.0 00:12:12.717 Attaching to 0000:00:13.0 00:12:12.717 Attached to 0000:00:10.0 00:12:12.717 Attached to 0000:00:11.0 00:12:12.717 Attached to 0000:00:13.0 00:12:12.717 Attached to 0000:00:12.0 00:12:12.717 Initialization complete. Starting I/O... 
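The four controllers the hotplug example just attached to come from the PCI class-code scan traced in scripts/common.sh above. That scan reduces to one pipeline (class 01 / subclass 08 / prog-if 02 is the NVMe signature); the commands are copied from the trace, and the BDF list is what this VM reported:

  # NVMe controllers in user space = PCI devices with class code 0108, prog-if 02:
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
  # nvme_count=2 then trims the array to the first two BDFs for the plug/unplug loop, while
  # build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning attaches to every controller it finds.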
00:12:12.717 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:12.717 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:12.717 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:12:12.717 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:12:12.717 00:12:13.655 QEMU NVMe Ctrl (12340 ): 1628 I/Os completed (+1628) 00:12:13.655 QEMU NVMe Ctrl (12341 ): 1633 I/Os completed (+1633) 00:12:13.655 QEMU NVMe Ctrl (12343 ): 1631 I/Os completed (+1631) 00:12:13.655 QEMU NVMe Ctrl (12342 ): 1631 I/Os completed (+1631) 00:12:13.655 00:12:15.034 QEMU NVMe Ctrl (12340 ): 3656 I/Os completed (+2028) 00:12:15.034 QEMU NVMe Ctrl (12341 ): 3668 I/Os completed (+2035) 00:12:15.034 QEMU NVMe Ctrl (12343 ): 3665 I/Os completed (+2034) 00:12:15.034 QEMU NVMe Ctrl (12342 ): 3668 I/Os completed (+2037) 00:12:15.034 00:12:15.972 QEMU NVMe Ctrl (12340 ): 5920 I/Os completed (+2264) 00:12:15.972 QEMU NVMe Ctrl (12341 ): 5932 I/Os completed (+2264) 00:12:15.972 QEMU NVMe Ctrl (12343 ): 5932 I/Os completed (+2267) 00:12:15.972 QEMU NVMe Ctrl (12342 ): 5941 I/Os completed (+2273) 00:12:15.972 00:12:16.912 QEMU NVMe Ctrl (12340 ): 8208 I/Os completed (+2288) 00:12:16.912 QEMU NVMe Ctrl (12341 ): 8231 I/Os completed (+2299) 00:12:16.912 QEMU NVMe Ctrl (12343 ): 8224 I/Os completed (+2292) 00:12:16.912 QEMU NVMe Ctrl (12342 ): 8235 I/Os completed (+2294) 00:12:16.912 00:12:17.851 QEMU NVMe Ctrl (12340 ): 10488 I/Os completed (+2280) 00:12:17.851 QEMU NVMe Ctrl (12341 ): 10511 I/Os completed (+2280) 00:12:17.851 QEMU NVMe Ctrl (12343 ): 10504 I/Os completed (+2280) 00:12:17.851 QEMU NVMe Ctrl (12342 ): 10517 I/Os completed (+2282) 00:12:17.851 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:18.790 [2024-07-23 18:30:18.483885] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:18.790 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:18.790 [2024-07-23 18:30:18.485295] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.485390] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.485433] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.485465] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:18.790 [2024-07-23 18:30:18.487595] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.487679] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.487717] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.487754] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:18.790 [2024-07-23 18:30:18.520862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
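Each hotplug event starts with the bare "echo 1" traced at sw_hotplug.sh@35; a sketch of what that presumably does to one controller, with the sysfs remove node being an assumption (only /sys/bus/pci/rescan appears verbatim later in this log):

  # Assumed surprise-removal step per device (the target of the traced "echo 1"):
  echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove
  # The hotplug example then reports the controller "in failed state", aborts its
  # outstanding commands, and unregisters it (unregister_dev: QEMU NVMe Ctrl (12340)).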
00:12:18.790 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:18.790 [2024-07-23 18:30:18.521985] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.522068] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.522108] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.522137] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:18.790 [2024-07-23 18:30:18.523654] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.523709] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.523751] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 [2024-07-23 18:30:18.523782] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:18.790 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:18.790 EAL: Scan for (pci) bus failed. 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:18.790 QEMU NVMe Ctrl (12343 ): 12838 I/Os completed (+2334) 00:12:18.790 QEMU NVMe Ctrl (12342 ): 12856 I/Os completed (+2339) 00:12:18.790 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:18.790 Attaching to 0000:00:10.0 00:12:18.790 Attached to 0000:00:10.0 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:18.790 18:30:18 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:18.790 Attaching to 0000:00:11.0 00:12:18.790 Attached to 0000:00:11.0 00:12:19.729 QEMU NVMe Ctrl (12343 ): 15150 I/Os completed (+2312) 00:12:19.729 QEMU NVMe Ctrl (12342 ): 15173 I/Os completed (+2317) 00:12:19.729 QEMU NVMe Ctrl (12340 ): 2172 I/Os completed (+2172) 00:12:19.729 QEMU NVMe Ctrl (12341 ): 1955 I/Os completed (+1955) 00:12:19.729 00:12:20.669 QEMU NVMe Ctrl (12343 ): 17386 I/Os completed (+2236) 00:12:20.669 QEMU NVMe Ctrl (12342 ): 17409 I/Os completed (+2236) 00:12:20.669 QEMU NVMe Ctrl (12340 ): 4408 I/Os completed (+2236) 00:12:20.669 QEMU NVMe Ctrl (12341 ): 4202 I/Os completed (+2247) 00:12:20.669 00:12:22.050 QEMU NVMe Ctrl (12343 ): 19642 I/Os completed (+2256) 00:12:22.050 QEMU NVMe Ctrl (12342 ): 19665 I/Os completed (+2256) 00:12:22.050 QEMU NVMe Ctrl (12340 ): 6665 I/Os completed (+2257) 00:12:22.050 QEMU NVMe Ctrl (12341 ): 6459 I/Os completed (+2257) 00:12:22.050 00:12:22.620 QEMU NVMe Ctrl (12343 ): 21882 
I/Os completed (+2240) 00:12:22.620 QEMU NVMe Ctrl (12342 ): 21909 I/Os completed (+2244) 00:12:22.620 QEMU NVMe Ctrl (12340 ): 8905 I/Os completed (+2240) 00:12:22.620 QEMU NVMe Ctrl (12341 ): 8699 I/Os completed (+2240) 00:12:22.620 00:12:24.000 QEMU NVMe Ctrl (12343 ): 24146 I/Os completed (+2264) 00:12:24.000 QEMU NVMe Ctrl (12342 ): 24173 I/Os completed (+2264) 00:12:24.000 QEMU NVMe Ctrl (12340 ): 11169 I/Os completed (+2264) 00:12:24.000 QEMU NVMe Ctrl (12341 ): 10963 I/Os completed (+2264) 00:12:24.000 00:12:24.939 QEMU NVMe Ctrl (12343 ): 26410 I/Os completed (+2264) 00:12:24.939 QEMU NVMe Ctrl (12342 ): 26441 I/Os completed (+2268) 00:12:24.939 QEMU NVMe Ctrl (12340 ): 13433 I/Os completed (+2264) 00:12:24.939 QEMU NVMe Ctrl (12341 ): 13227 I/Os completed (+2264) 00:12:24.939 00:12:25.879 QEMU NVMe Ctrl (12343 ): 28690 I/Os completed (+2280) 00:12:25.879 QEMU NVMe Ctrl (12342 ): 28721 I/Os completed (+2280) 00:12:25.879 QEMU NVMe Ctrl (12340 ): 15713 I/Os completed (+2280) 00:12:25.879 QEMU NVMe Ctrl (12341 ): 15511 I/Os completed (+2284) 00:12:25.879 00:12:26.819 QEMU NVMe Ctrl (12343 ): 30934 I/Os completed (+2244) 00:12:26.819 QEMU NVMe Ctrl (12342 ): 30971 I/Os completed (+2250) 00:12:26.819 QEMU NVMe Ctrl (12340 ): 17967 I/Os completed (+2254) 00:12:26.819 QEMU NVMe Ctrl (12341 ): 17756 I/Os completed (+2245) 00:12:26.819 00:12:27.759 QEMU NVMe Ctrl (12343 ): 33170 I/Os completed (+2236) 00:12:27.759 QEMU NVMe Ctrl (12342 ): 33211 I/Os completed (+2240) 00:12:27.759 QEMU NVMe Ctrl (12340 ): 20205 I/Os completed (+2238) 00:12:27.759 QEMU NVMe Ctrl (12341 ): 19994 I/Os completed (+2238) 00:12:27.759 00:12:28.699 QEMU NVMe Ctrl (12343 ): 35418 I/Os completed (+2248) 00:12:28.699 QEMU NVMe Ctrl (12342 ): 35473 I/Os completed (+2262) 00:12:28.699 QEMU NVMe Ctrl (12340 ): 22455 I/Os completed (+2250) 00:12:28.699 QEMU NVMe Ctrl (12341 ): 22244 I/Os completed (+2250) 00:12:28.699 00:12:29.648 QEMU NVMe Ctrl (12343 ): 37642 I/Os completed (+2224) 00:12:29.648 QEMU NVMe Ctrl (12342 ): 37698 I/Os completed (+2225) 00:12:29.648 QEMU NVMe Ctrl (12340 ): 24679 I/Os completed (+2224) 00:12:29.648 QEMU NVMe Ctrl (12341 ): 24478 I/Os completed (+2234) 00:12:29.648 00:12:30.619 QEMU NVMe Ctrl (12343 ): 39926 I/Os completed (+2284) 00:12:30.619 QEMU NVMe Ctrl (12342 ): 39982 I/Os completed (+2284) 00:12:30.619 QEMU NVMe Ctrl (12340 ): 26965 I/Os completed (+2286) 00:12:30.619 QEMU NVMe Ctrl (12341 ): 26764 I/Os completed (+2286) 00:12:30.619 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:30.879 [2024-07-23 18:30:30.808814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:30.879 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:30.879 [2024-07-23 18:30:30.810246] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.810347] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.810388] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.810431] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:30.879 [2024-07-23 18:30:30.812219] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.812280] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.812316] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.812358] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:30.879 [2024-07-23 18:30:30.848909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:30.879 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:30.879 [2024-07-23 18:30:30.850245] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.850327] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.850389] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.850425] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:30.879 [2024-07-23 18:30:30.851932] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.851984] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.852021] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 [2024-07-23 18:30:30.852053] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:30.879 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:31.140 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:31.140 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:31.140 18:30:30 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:31.140 Attaching to 0000:00:10.0 00:12:31.140 Attached to 0000:00:10.0 
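Between events the devices are brought back: the trace shows an "echo 1" followed, per device, by echoes of the driver name, the BDF (twice), and an empty string (sw_hotplug.sh@44-@50). A hedged reconstruction of that rebind, where every sysfs path except /sys/bus/pci/rescan is an assumption about where those echoes land:

  echo 1 > /sys/bus/pci/rescan                                        # presumed rescan (sw_hotplug.sh@44)
  dev=0000:00:10.0
  echo uio_pci_generic > /sys/bus/pci/devices/$dev/driver_override    # assumed: pin the userspace driver
  echo "$dev" > /sys/bus/pci/drivers_probe                            # assumed: ask the kernel to bind it
  echo '' > /sys/bus/pci/devices/$dev/driver_override                 # assumed: clear the override again
  # after which the example logs "Attaching to 0000:00:10.0" / "Attached to 0000:00:10.0" as above.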
00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:31.140 18:30:31 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:31.140 Attaching to 0000:00:11.0 00:12:31.140 Attached to 0000:00:11.0 00:12:31.710 QEMU NVMe Ctrl (12343 ): 42338 I/Os completed (+2412) 00:12:31.710 QEMU NVMe Ctrl (12342 ): 42387 I/Os completed (+2405) 00:12:31.710 QEMU NVMe Ctrl (12340 ): 1366 I/Os completed (+1366) 00:12:31.710 QEMU NVMe Ctrl (12341 ): 1120 I/Os completed (+1120) 00:12:31.710 00:12:32.650 QEMU NVMe Ctrl (12343 ): 44559 I/Os completed (+2221) 00:12:32.650 QEMU NVMe Ctrl (12342 ): 44606 I/Os completed (+2219) 00:12:32.650 QEMU NVMe Ctrl (12340 ): 3590 I/Os completed (+2224) 00:12:32.650 QEMU NVMe Ctrl (12341 ): 3401 I/Os completed (+2281) 00:12:32.650 00:12:34.026 QEMU NVMe Ctrl (12343 ): 46778 I/Os completed (+2219) 00:12:34.026 QEMU NVMe Ctrl (12342 ): 46822 I/Os completed (+2216) 00:12:34.026 QEMU NVMe Ctrl (12340 ): 5810 I/Os completed (+2220) 00:12:34.026 QEMU NVMe Ctrl (12341 ): 5620 I/Os completed (+2219) 00:12:34.026 00:12:34.594 QEMU NVMe Ctrl (12343 ): 48985 I/Os completed (+2207) 00:12:34.594 QEMU NVMe Ctrl (12342 ): 49031 I/Os completed (+2209) 00:12:34.594 QEMU NVMe Ctrl (12340 ): 8013 I/Os completed (+2203) 00:12:34.594 QEMU NVMe Ctrl (12341 ): 7861 I/Os completed (+2241) 00:12:34.594 00:12:35.973 QEMU NVMe Ctrl (12343 ): 51221 I/Os completed (+2236) 00:12:35.973 QEMU NVMe Ctrl (12342 ): 51272 I/Os completed (+2241) 00:12:35.973 QEMU NVMe Ctrl (12340 ): 10249 I/Os completed (+2236) 00:12:35.973 QEMU NVMe Ctrl (12341 ): 10104 I/Os completed (+2243) 00:12:35.973 00:12:36.913 QEMU NVMe Ctrl (12343 ): 53473 I/Os completed (+2252) 00:12:36.913 QEMU NVMe Ctrl (12342 ): 53524 I/Os completed (+2252) 00:12:36.913 QEMU NVMe Ctrl (12340 ): 12501 I/Os completed (+2252) 00:12:36.913 QEMU NVMe Ctrl (12341 ): 12356 I/Os completed (+2252) 00:12:36.913 00:12:37.851 QEMU NVMe Ctrl (12343 ): 55729 I/Os completed (+2256) 00:12:37.851 QEMU NVMe Ctrl (12342 ): 55784 I/Os completed (+2260) 00:12:37.851 QEMU NVMe Ctrl (12340 ): 14760 I/Os completed (+2259) 00:12:37.851 QEMU NVMe Ctrl (12341 ): 14614 I/Os completed (+2258) 00:12:37.851 00:12:38.789 QEMU NVMe Ctrl (12343 ): 57951 I/Os completed (+2222) 00:12:38.789 QEMU NVMe Ctrl (12342 ): 58018 I/Os completed (+2234) 00:12:38.789 QEMU NVMe Ctrl (12340 ): 17045 I/Os completed (+2285) 00:12:38.789 QEMU NVMe Ctrl (12341 ): 16838 I/Os completed (+2224) 00:12:38.789 00:12:39.728 QEMU NVMe Ctrl (12343 ): 60191 I/Os completed (+2240) 00:12:39.728 QEMU NVMe Ctrl (12342 ): 60258 I/Os completed (+2240) 00:12:39.728 QEMU NVMe Ctrl (12340 ): 19285 I/Os completed (+2240) 00:12:39.728 QEMU NVMe Ctrl (12341 ): 19078 I/Os completed (+2240) 00:12:39.728 00:12:40.667 QEMU NVMe Ctrl (12343 ): 62435 I/Os completed (+2244) 00:12:40.667 QEMU NVMe Ctrl (12342 ): 62502 I/Os completed (+2244) 00:12:40.667 QEMU NVMe Ctrl (12340 ): 21531 I/Os completed (+2246) 00:12:40.667 QEMU NVMe Ctrl (12341 ): 21324 I/Os completed (+2246) 00:12:40.667 00:12:41.607 QEMU NVMe Ctrl (12343 ): 64683 I/Os completed (+2248) 00:12:41.607 QEMU NVMe Ctrl (12342 ): 64751 I/Os completed (+2249) 00:12:41.607 QEMU NVMe Ctrl (12340 ): 23780 I/Os completed (+2249) 00:12:41.607 QEMU NVMe Ctrl (12341 ): 23572 I/Os completed (+2248) 00:12:41.607 00:12:42.988 QEMU NVMe Ctrl (12343 ): 66915 I/Os completed (+2232) 00:12:42.988 QEMU NVMe Ctrl (12342 ): 66983 I/Os completed (+2232) 00:12:42.988 QEMU NVMe Ctrl (12340 
): 26012 I/Os completed (+2232) 00:12:42.988 QEMU NVMe Ctrl (12341 ): 25806 I/Os completed (+2234) 00:12:42.988 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:43.249 [2024-07-23 18:30:43.149033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:43.249 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:43.249 [2024-07-23 18:30:43.150643] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.150740] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.150778] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.150825] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:43.249 [2024-07-23 18:30:43.152487] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.152552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.152611] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.152649] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:43.249 [2024-07-23 18:30:43.188718] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:43.249 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:43.249 [2024-07-23 18:30:43.190035] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.190106] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.190153] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.190192] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:43.249 [2024-07-23 18:30:43.192175] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.192240] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.192284] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 [2024-07-23 18:30:43.192316] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:43.249 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:43.507 Attaching to 0000:00:10.0 00:12:43.507 Attached to 0000:00:10.0 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:43.507 18:30:43 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:43.507 Attaching to 0000:00:11.0 00:12:43.507 Attached to 0000:00:11.0 00:12:43.507 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:12:43.507 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:12:43.507 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:43.507 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:43.507 [2024-07-23 18:30:43.482290] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:55.751 18:30:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:55.751 18:30:55 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:55.751 18:30:55 sw_hotplug -- common/autotest_common.sh@714 -- # time=42.99 00:12:55.751 18:30:55 sw_hotplug -- common/autotest_common.sh@716 -- # echo 42.99 00:12:55.751 18:30:55 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=42.99 00:12:55.751 18:30:55 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:12:55.751 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 18:30:55 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 83991 00:13:02.321 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (83991) - No such process 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 83991 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=84530 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:02.321 18:31:01 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 84530 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 84530 ']' 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.321 18:31:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.321 [2024-07-23 18:31:01.574037] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:02.321 [2024-07-23 18:31:01.574247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84530 ] 00:13:02.321 [2024-07-23 18:31:01.722762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.321 [2024-07-23 18:31:01.791266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.321 18:31:02 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.321 18:31:02 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:13:02.321 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:13:02.321 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:13:02.321 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.321 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.580 Nvme00n1 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.580 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.580 
18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.580 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.580 [ 00:13:02.580 { 00:13:02.580 "name": "Nvme00n1", 00:13:02.580 "aliases": [ 00:13:02.580 "de49a67c-757a-4d66-872d-d24cae6510c5" 00:13:02.581 ], 00:13:02.581 "product_name": "NVMe disk", 00:13:02.581 "block_size": 4096, 00:13:02.581 "num_blocks": 1548666, 00:13:02.581 "uuid": "de49a67c-757a-4d66-872d-d24cae6510c5", 00:13:02.581 "md_size": 64, 00:13:02.581 "md_interleave": false, 00:13:02.581 "dif_type": 0, 00:13:02.581 "assigned_rate_limits": { 00:13:02.581 "rw_ios_per_sec": 0, 00:13:02.581 "rw_mbytes_per_sec": 0, 00:13:02.581 "r_mbytes_per_sec": 0, 00:13:02.581 "w_mbytes_per_sec": 0 00:13:02.581 }, 00:13:02.581 "claimed": false, 00:13:02.581 "zoned": false, 00:13:02.581 "supported_io_types": { 00:13:02.581 "read": true, 00:13:02.581 "write": true, 00:13:02.581 "unmap": true, 00:13:02.581 "write_zeroes": true, 00:13:02.581 "flush": true, 00:13:02.581 "reset": true, 00:13:02.581 "compare": true, 00:13:02.581 "compare_and_write": false, 00:13:02.581 "abort": true, 00:13:02.581 "nvme_admin": true, 00:13:02.581 "nvme_io": true 00:13:02.581 }, 00:13:02.581 "driver_specific": { 00:13:02.581 "nvme": [ 00:13:02.581 { 00:13:02.581 "pci_address": "0000:00:10.0", 00:13:02.581 "trid": { 00:13:02.581 "trtype": "PCIe", 00:13:02.581 "traddr": "0000:00:10.0" 00:13:02.581 }, 00:13:02.581 "ctrlr_data": { 00:13:02.581 "cntlid": 0, 00:13:02.581 "vendor_id": "0x1b36", 00:13:02.581 "model_number": "QEMU NVMe Ctrl", 00:13:02.581 "serial_number": "12340", 00:13:02.581 "firmware_revision": "8.0.0", 00:13:02.581 "subnqn": "nqn.2019-08.org.qemu:12340", 00:13:02.581 "oacs": { 00:13:02.581 "security": 0, 00:13:02.581 "format": 1, 00:13:02.581 "firmware": 0, 00:13:02.581 "ns_manage": 1 00:13:02.581 }, 00:13:02.581 "multi_ctrlr": false, 00:13:02.581 "ana_reporting": false 00:13:02.581 }, 00:13:02.581 "vs": { 00:13:02.581 "nvme_version": "1.4" 00:13:02.581 }, 00:13:02.581 "ns_data": { 00:13:02.581 "id": 1, 00:13:02.581 "can_share": false 00:13:02.581 } 00:13:02.581 } 00:13:02.581 ], 00:13:02.581 "mp_policy": "active_passive" 00:13:02.581 } 00:13:02.581 } 00:13:02.581 ] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 Nvme01n1 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:02.581 18:31:02 
sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 [ 00:13:02.581 { 00:13:02.581 "name": "Nvme01n1", 00:13:02.581 "aliases": [ 00:13:02.581 "8606aee9-08f9-4c64-9e0b-37624c45008c" 00:13:02.581 ], 00:13:02.581 "product_name": "NVMe disk", 00:13:02.581 "block_size": 4096, 00:13:02.581 "num_blocks": 1310720, 00:13:02.581 "uuid": "8606aee9-08f9-4c64-9e0b-37624c45008c", 00:13:02.581 "assigned_rate_limits": { 00:13:02.581 "rw_ios_per_sec": 0, 00:13:02.581 "rw_mbytes_per_sec": 0, 00:13:02.581 "r_mbytes_per_sec": 0, 00:13:02.581 "w_mbytes_per_sec": 0 00:13:02.581 }, 00:13:02.581 "claimed": false, 00:13:02.581 "zoned": false, 00:13:02.581 "supported_io_types": { 00:13:02.581 "read": true, 00:13:02.581 "write": true, 00:13:02.581 "unmap": true, 00:13:02.581 "write_zeroes": true, 00:13:02.581 "flush": true, 00:13:02.581 "reset": true, 00:13:02.581 "compare": true, 00:13:02.581 "compare_and_write": false, 00:13:02.581 "abort": true, 00:13:02.581 "nvme_admin": true, 00:13:02.581 "nvme_io": true 00:13:02.581 }, 00:13:02.581 "driver_specific": { 00:13:02.581 "nvme": [ 00:13:02.581 { 00:13:02.581 "pci_address": "0000:00:11.0", 00:13:02.581 "trid": { 00:13:02.581 "trtype": "PCIe", 00:13:02.581 "traddr": "0000:00:11.0" 00:13:02.581 }, 00:13:02.581 "ctrlr_data": { 00:13:02.581 "cntlid": 0, 00:13:02.581 "vendor_id": "0x1b36", 00:13:02.581 "model_number": "QEMU NVMe Ctrl", 00:13:02.581 "serial_number": "12341", 00:13:02.581 "firmware_revision": "8.0.0", 00:13:02.581 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:02.581 "oacs": { 00:13:02.581 "security": 0, 00:13:02.581 "format": 1, 00:13:02.581 "firmware": 0, 00:13:02.581 "ns_manage": 1 00:13:02.581 }, 00:13:02.581 "multi_ctrlr": false, 00:13:02.581 "ana_reporting": false 00:13:02.581 }, 00:13:02.581 "vs": { 00:13:02.581 "nvme_version": "1.4" 00:13:02.581 }, 00:13:02.581 "ns_data": { 00:13:02.581 "id": 1, 00:13:02.581 "can_share": false 00:13:02.581 } 00:13:02.581 } 00:13:02.581 ], 00:13:02.581 "mp_policy": "active_passive" 00:13:02.581 } 00:13:02.581 } 00:13:02.581 ] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@706 -- # exec 
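From here the test repeats the plug/unplug cycle against a running spdk_tgt instead of the standalone example: each controller is attached as a bdev, hotplug monitoring is switched on in the target, and the helper then watches the bdev list while devices are yanked and restored. A condensed sketch of the RPC side, using only calls that appear in this trace:

  rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0
  rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0
  rpc_cmd bdev_nvme_set_hotplug -e              # let the target react to hotplug events itself
  rpc_cmd bdev_get_bdevs | jq length            # the helper polls this count as devices come and go
  rpc_cmd bdev_nvme_set_hotplug -d              # disabled again between passes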
00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:02.581 18:31:02 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:02.581 18:31:02 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:09.144 18:31:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:09.144 [2024-07-23 18:31:08.649234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:09.144 [2024-07-23 18:31:08.650813] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:08.650858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:08.650882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:08.650907] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:08.650920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:08.650933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:08.650942] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:08.650956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:08.650964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:08.650975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:08.650983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:08.650995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:09.048458] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:09.144 [2024-07-23 18:31:09.049892] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:09.049926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:09.049943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:09.049958] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:09.049970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:09.049979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:09.049990] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:09.049997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:09.050008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.144 [2024-07-23 18:31:09.050016] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.144 [2024-07-23 18:31:09.050029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.144 [2024-07-23 18:31:09.050036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.11 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.11 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.11 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.11 2 00:13:15.717 remove_attach_helper took 12.11s to complete (handling 2 nvme drive(s)) 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:15.717 18:31:14 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:15.717 18:31:14 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:13:21.001 18:31:20 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:21.001 18:31:20 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:21.001 18:31:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.06 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:27.579 18:31:26 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:27.579 18:31:26 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.06 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.06 00:13:27.579 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.06 2 00:13:27.580 remove_attach_helper took 12.06s to complete (handling 2 nvme drive(s)) 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:13:27.580 18:31:26 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 84530 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 84530 ']' 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 84530 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84530 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:27.580 killing process with pid 84530 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84530' 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@965 -- # kill 84530 00:13:27.580 18:31:26 sw_hotplug -- common/autotest_common.sh@970 -- # wait 84530 00:13:27.580 00:13:27.580 real 1m16.064s 00:13:27.580 user 0m46.035s 00:13:27.580 sys 0m12.992s 00:13:27.580 18:31:27 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:27.580 18:31:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:27.580 ************************************ 00:13:27.580 END TEST sw_hotplug 00:13:27.580 ************************************ 00:13:27.580 18:31:27 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:27.580 18:31:27 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:27.580 18:31:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:27.580 18:31:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.580 18:31:27 -- common/autotest_common.sh@10 -- # set +x 00:13:27.580 ************************************ 00:13:27.580 START TEST nvme_xnvme 00:13:27.580 ************************************ 00:13:27.580 18:31:27 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:27.580 * Looking for test storage... 
00:13:27.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:27.580 18:31:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.840 18:31:27 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.840 18:31:27 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.840 18:31:27 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.840 18:31:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.840 18:31:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.840 18:31:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.840 18:31:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:27.840 18:31:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.840 18:31:27 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:27.840 18:31:27 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:27.840 18:31:27 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.840 18:31:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.840 ************************************ 00:13:27.840 START TEST xnvme_to_malloc_dd_copy 00:13:27.840 ************************************ 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:27.840 18:31:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:27.840 { 00:13:27.840 "subsystems": [ 00:13:27.840 { 00:13:27.840 "subsystem": "bdev", 00:13:27.840 "config": [ 00:13:27.840 { 00:13:27.840 "params": { 00:13:27.840 "block_size": 512, 00:13:27.840 "num_blocks": 2097152, 00:13:27.840 "name": "malloc0" 00:13:27.840 }, 00:13:27.840 "method": "bdev_malloc_create" 00:13:27.840 }, 00:13:27.840 { 00:13:27.840 "params": { 00:13:27.840 "io_mechanism": "libaio", 00:13:27.840 "filename": "/dev/nullb0", 00:13:27.840 "name": "null0" 00:13:27.840 }, 00:13:27.840 "method": "bdev_xnvme_create" 00:13:27.840 }, 00:13:27.840 { 00:13:27.840 "method": "bdev_wait_for_examine" 00:13:27.840 } 00:13:27.840 ] 00:13:27.840 } 00:13:27.840 ] 00:13:27.840 } 00:13:27.840 [2024-07-23 18:31:27.757847] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:27.840 [2024-07-23 18:31:27.758009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84883 ] 00:13:28.100 [2024-07-23 18:31:27.892856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.100 [2024-07-23 18:31:27.960375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.310  Copying: 280/1024 [MB] (280 MBps) Copying: 561/1024 [MB] (281 MBps) Copying: 842/1024 [MB] (280 MBps) Copying: 1024/1024 [MB] (average 281 MBps) 00:13:33.310 00:13:33.310 18:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:33.310 18:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:33.310 18:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:33.310 18:31:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:33.310 { 00:13:33.310 "subsystems": [ 00:13:33.310 { 00:13:33.310 "subsystem": "bdev", 00:13:33.310 "config": [ 00:13:33.310 { 00:13:33.310 "params": { 00:13:33.310 "block_size": 512, 00:13:33.310 "num_blocks": 2097152, 00:13:33.310 "name": "malloc0" 00:13:33.310 }, 00:13:33.310 "method": "bdev_malloc_create" 00:13:33.310 }, 00:13:33.310 { 00:13:33.310 "params": { 00:13:33.310 "io_mechanism": "libaio", 00:13:33.310 "filename": "/dev/nullb0", 00:13:33.310 "name": "null0" 00:13:33.310 }, 00:13:33.310 "method": "bdev_xnvme_create" 00:13:33.310 }, 00:13:33.310 { 00:13:33.310 "method": "bdev_wait_for_examine" 00:13:33.310 } 00:13:33.310 ] 00:13:33.310 } 00:13:33.310 ] 00:13:33.310 } 00:13:33.310 [2024-07-23 18:31:33.152309] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:33.310 [2024-07-23 18:31:33.152471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84950 ] 00:13:33.310 [2024-07-23 18:31:33.297602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.570 [2024-07-23 18:31:33.365737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.464  Copying: 282/1024 [MB] (282 MBps) Copying: 564/1024 [MB] (281 MBps) Copying: 843/1024 [MB] (279 MBps) Copying: 1024/1024 [MB] (average 281 MBps) 00:13:38.464 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:38.464 18:31:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:38.723 { 00:13:38.723 "subsystems": [ 00:13:38.723 { 00:13:38.723 "subsystem": "bdev", 00:13:38.723 "config": [ 00:13:38.723 { 00:13:38.723 "params": { 00:13:38.723 "block_size": 512, 00:13:38.723 "num_blocks": 2097152, 00:13:38.723 "name": "malloc0" 00:13:38.723 }, 00:13:38.723 "method": "bdev_malloc_create" 00:13:38.723 }, 00:13:38.723 { 00:13:38.723 "params": { 00:13:38.723 "io_mechanism": "io_uring", 00:13:38.723 "filename": "/dev/nullb0", 00:13:38.723 "name": "null0" 00:13:38.723 }, 00:13:38.723 "method": "bdev_xnvme_create" 00:13:38.724 }, 00:13:38.724 { 00:13:38.724 "method": "bdev_wait_for_examine" 00:13:38.724 } 00:13:38.724 ] 00:13:38.724 } 00:13:38.724 ] 00:13:38.724 } 00:13:38.724 [2024-07-23 18:31:38.564876] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:38.724 [2024-07-23 18:31:38.564997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85015 ] 00:13:38.724 [2024-07-23 18:31:38.711901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.983 [2024-07-23 18:31:38.779820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.777  Copying: 286/1024 [MB] (286 MBps) Copying: 574/1024 [MB] (287 MBps) Copying: 860/1024 [MB] (286 MBps) Copying: 1024/1024 [MB] (average 287 MBps) 00:13:43.777 00:13:43.777 18:31:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:43.777 18:31:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:43.777 18:31:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:43.777 18:31:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:44.037 { 00:13:44.037 "subsystems": [ 00:13:44.037 { 00:13:44.037 "subsystem": "bdev", 00:13:44.037 "config": [ 00:13:44.037 { 00:13:44.037 "params": { 00:13:44.037 "block_size": 512, 00:13:44.037 "num_blocks": 2097152, 00:13:44.037 "name": "malloc0" 00:13:44.037 }, 00:13:44.037 "method": "bdev_malloc_create" 00:13:44.037 }, 00:13:44.037 { 00:13:44.037 "params": { 00:13:44.037 "io_mechanism": "io_uring", 00:13:44.037 "filename": "/dev/nullb0", 00:13:44.037 "name": "null0" 00:13:44.037 }, 00:13:44.037 "method": "bdev_xnvme_create" 00:13:44.037 }, 00:13:44.037 { 00:13:44.037 "method": "bdev_wait_for_examine" 00:13:44.037 } 00:13:44.037 ] 00:13:44.037 } 00:13:44.037 ] 00:13:44.037 } 00:13:44.037 [2024-07-23 18:31:43.867631] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:44.037 [2024-07-23 18:31:43.867775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85081 ] 00:13:44.037 [2024-07-23 18:31:44.011954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.037 [2024-07-23 18:31:44.080516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.023  Copying: 293/1024 [MB] (293 MBps) Copying: 587/1024 [MB] (294 MBps) Copying: 880/1024 [MB] (293 MBps) Copying: 1024/1024 [MB] (average 294 MBps) 00:13:49.023 00:13:49.023 18:31:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:49.023 18:31:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:49.023 00:13:49.023 real 0m21.413s 00:13:49.023 user 0m16.996s 00:13:49.023 sys 0m4.009s 00:13:49.023 18:31:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.023 ************************************ 00:13:49.023 END TEST xnvme_to_malloc_dd_copy 00:13:49.023 ************************************ 00:13:49.023 18:31:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:49.283 18:31:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:49.283 18:31:49 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:49.283 18:31:49 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.283 18:31:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.283 ************************************ 00:13:49.283 START TEST xnvme_bdevperf 00:13:49.283 ************************************ 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:49.283 18:31:49 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:49.283 18:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:49.283 { 00:13:49.283 "subsystems": [ 00:13:49.283 { 00:13:49.283 "subsystem": "bdev", 00:13:49.283 "config": [ 00:13:49.283 { 00:13:49.283 "params": { 00:13:49.283 "io_mechanism": "libaio", 00:13:49.283 "filename": "/dev/nullb0", 00:13:49.283 "name": "null0" 00:13:49.283 }, 00:13:49.283 "method": "bdev_xnvme_create" 00:13:49.283 }, 00:13:49.283 { 00:13:49.283 "method": "bdev_wait_for_examine" 00:13:49.283 } 00:13:49.283 ] 00:13:49.283 } 00:13:49.283 ] 00:13:49.283 } 00:13:49.283 [2024-07-23 18:31:49.246813] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:49.283 [2024-07-23 18:31:49.246987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85168 ] 00:13:49.543 [2024-07-23 18:31:49.391565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.543 [2024-07-23 18:31:49.460282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.803 Running I/O for 5 seconds... 00:13:55.082 00:13:55.082 Latency(us) 00:13:55.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.082 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:55.082 null0 : 5.00 189203.10 739.07 0.00 0.00 335.91 118.05 457.89 00:13:55.082 =================================================================================================================== 00:13:55.082 Total : 189203.10 739.07 0.00 0.00 335.91 118.05 457.89 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:55.082 18:31:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:55.082 { 00:13:55.082 "subsystems": [ 00:13:55.082 { 00:13:55.082 "subsystem": "bdev", 00:13:55.082 "config": [ 00:13:55.082 { 00:13:55.082 "params": { 00:13:55.082 "io_mechanism": "io_uring", 00:13:55.082 "filename": "/dev/nullb0", 00:13:55.082 "name": "null0" 00:13:55.082 }, 00:13:55.082 "method": "bdev_xnvme_create" 00:13:55.082 }, 00:13:55.082 { 00:13:55.082 "method": "bdev_wait_for_examine" 00:13:55.082 } 00:13:55.082 ] 00:13:55.082 } 00:13:55.082 ] 00:13:55.082 } 00:13:55.082 [2024-07-23 18:31:55.028833] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:13:55.082 [2024-07-23 18:31:55.029393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85236 ] 00:13:55.342 [2024-07-23 18:31:55.174062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.342 [2024-07-23 18:31:55.241754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.342 Running I/O for 5 seconds... 00:14:00.629 00:14:00.629 Latency(us) 00:14:00.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.629 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:00.629 null0 : 5.00 229377.79 896.01 0.00 0.00 276.80 150.25 402.45 00:14:00.629 =================================================================================================================== 00:14:00.630 Total : 229377.79 896.01 0.00 0.00 276.80 150.25 402.45 00:14:00.903 18:32:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:00.903 18:32:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:00.903 00:14:00.903 real 0m11.621s 00:14:00.903 user 0m9.050s 00:14:00.903 sys 0m2.372s 00:14:00.903 18:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.903 ************************************ 00:14:00.903 END TEST xnvme_bdevperf 00:14:00.903 ************************************ 00:14:00.903 18:32:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.903 ************************************ 00:14:00.903 END TEST nvme_xnvme 00:14:00.903 ************************************ 00:14:00.903 00:14:00.903 real 0m33.293s 00:14:00.903 user 0m26.142s 00:14:00.903 sys 0m6.550s 00:14:00.903 18:32:00 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.903 18:32:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.903 18:32:00 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:00.903 18:32:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:00.903 18:32:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.903 18:32:00 -- common/autotest_common.sh@10 -- # set +x 00:14:00.903 ************************************ 00:14:00.903 START TEST blockdev_xnvme 00:14:00.903 ************************************ 00:14:00.903 18:32:00 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:01.163 * Looking for test storage... 
00:14:01.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:01.163 18:32:00 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85366 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:01.163 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85366 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 85366 ']' 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.163 18:32:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.163 [2024-07-23 18:32:01.093933] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:01.163 [2024-07-23 18:32:01.094144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85366 ] 00:14:01.423 [2024-07-23 18:32:01.239738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.423 [2024-07-23 18:32:01.307617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.991 18:32:01 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.991 18:32:01 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:14:01.991 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:01.991 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:01.991 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:01.991 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:01.991 18:32:01 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:02.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:02.508 Waiting for block devices as requested 00:14:02.508 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.508 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:07.783 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 
18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:07.783 nvme0n1 00:14:07.783 nvme0n2 00:14:07.783 nvme0n3 00:14:07.783 nvme1n1 00:14:07.783 nvme2n1 00:14:07.783 nvme3n1 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
00:14:07.783 18:32:07 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.783 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "31a9ed1a-5553-4ede-88ba-636a591febb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31a9ed1a-5553-4ede-88ba-636a591febb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "23a807d1-bba1-4989-b40f-3d66fb4f629e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23a807d1-bba1-4989-b40f-3d66fb4f629e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d898319a-f535-4453-a426-8d37243035c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d898319a-f535-4453-a426-8d37243035c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d332a229-7f2c-420b-9e66-9b8eb3eabb22"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d332a229-7f2c-420b-9e66-9b8eb3eabb22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a5c4613a-f4f3-4ec7-a34b-0b9aad1c7d28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a5c4613a-f4f3-4ec7-a34b-0b9aad1c7d28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "526a9576-6805-4649-997a-761a6a3f7139"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "526a9576-6805-4649-997a-761a6a3f7139",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:07.784 18:32:07 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 85366 00:14:07.784 18:32:07 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 85366 ']' 00:14:07.784 18:32:07 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 85366 00:14:07.784 18:32:07 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:14:07.784 18:32:07 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85366 00:14:08.043 killing process with pid 85366 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85366' 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 85366 00:14:08.043 18:32:07 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 85366 00:14:08.612 18:32:08 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:08.612 18:32:08 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:08.612 18:32:08 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:08.612 18:32:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.612 18:32:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.612 ************************************ 00:14:08.612 START TEST bdev_hello_world 00:14:08.612 ************************************ 00:14:08.612 18:32:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:08.612 [2024-07-23 18:32:08.562121] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:08.612 [2024-07-23 18:32:08.562249] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85637 ] 00:14:08.872 [2024-07-23 18:32:08.707462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.872 [2024-07-23 18:32:08.775016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.131 [2024-07-23 18:32:09.000528] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:09.131 [2024-07-23 18:32:09.000597] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:09.131 [2024-07-23 18:32:09.000613] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:09.131 [2024-07-23 18:32:09.002636] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:09.131 [2024-07-23 18:32:09.003028] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:09.131 [2024-07-23 18:32:09.003056] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:09.131 [2024-07-23 18:32:09.003263] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:14:09.131 00:14:09.131 [2024-07-23 18:32:09.003289] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:09.392 00:14:09.392 ************************************ 00:14:09.392 END TEST bdev_hello_world 00:14:09.392 ************************************ 00:14:09.392 real 0m0.864s 00:14:09.392 user 0m0.477s 00:14:09.392 sys 0m0.277s 00:14:09.392 18:32:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.392 18:32:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:09.392 18:32:09 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:09.392 18:32:09 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.392 18:32:09 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.392 18:32:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.392 ************************************ 00:14:09.392 START TEST bdev_bounds 00:14:09.392 ************************************ 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:14:09.392 Process bdevio pid: 85661 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85661 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85661' 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85661 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 85661 ']' 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.392 18:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:09.651 [2024-07-23 18:32:09.486662] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:09.651 [2024-07-23 18:32:09.486811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85661 ] 00:14:09.651 [2024-07-23 18:32:09.635081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.911 [2024-07-23 18:32:09.705356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.911 [2024-07-23 18:32:09.705405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.911 [2024-07-23 18:32:09.705508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.480 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.480 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:14:10.480 18:32:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:10.480 I/O targets: 00:14:10.480 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:10.480 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:10.480 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:10.480 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:10.480 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:10.480 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:10.480 00:14:10.480 00:14:10.480 CUnit - A unit testing framework for C - Version 2.1-3 00:14:10.480 http://cunit.sourceforge.net/ 00:14:10.480 00:14:10.480 00:14:10.480 Suite: bdevio tests on: nvme3n1 00:14:10.480 Test: blockdev write read block ...passed 00:14:10.480 Test: blockdev write zeroes read block ...passed 00:14:10.480 Test: blockdev write zeroes read no split ...passed 00:14:10.480 Test: blockdev write zeroes read split ...passed 00:14:10.480 Test: blockdev write zeroes read split partial ...passed 00:14:10.480 Test: blockdev reset ...passed 00:14:10.480 Test: blockdev write read 8 blocks ...passed 00:14:10.480 Test: blockdev write read size > 128k ...passed 00:14:10.480 Test: blockdev write read invalid size ...passed 00:14:10.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.480 Test: blockdev write read max offset ...passed 00:14:10.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.480 Test: blockdev writev readv 8 blocks ...passed 00:14:10.480 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.480 Test: blockdev writev readv block ...passed 00:14:10.480 Test: blockdev writev readv size > 128k ...passed 00:14:10.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.480 Test: blockdev comparev and writev ...passed 00:14:10.480 Test: blockdev nvme passthru rw ...passed 
00:14:10.480 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.480 Test: blockdev nvme admin passthru ...passed 00:14:10.480 Test: blockdev copy ...passed 00:14:10.480 Suite: bdevio tests on: nvme2n1 00:14:10.480 Test: blockdev write read block ...passed 00:14:10.480 Test: blockdev write zeroes read block ...passed 00:14:10.480 Test: blockdev write zeroes read no split ...passed 00:14:10.480 Test: blockdev write zeroes read split ...passed 00:14:10.480 Test: blockdev write zeroes read split partial ...passed 00:14:10.480 Test: blockdev reset ...passed 00:14:10.480 Test: blockdev write read 8 blocks ...passed 00:14:10.480 Test: blockdev write read size > 128k ...passed 00:14:10.480 Test: blockdev write read invalid size ...passed 00:14:10.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.480 Test: blockdev write read max offset ...passed 00:14:10.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.480 Test: blockdev writev readv 8 blocks ...passed 00:14:10.480 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.480 Test: blockdev writev readv block ...passed 00:14:10.480 Test: blockdev writev readv size > 128k ...passed 00:14:10.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.480 Test: blockdev comparev and writev ...passed 00:14:10.480 Test: blockdev nvme passthru rw ...passed 00:14:10.480 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.480 Test: blockdev nvme admin passthru ...passed 00:14:10.480 Test: blockdev copy ...passed 00:14:10.480 Suite: bdevio tests on: nvme1n1 00:14:10.480 Test: blockdev write read block ...passed 00:14:10.480 Test: blockdev write zeroes read block ...passed 00:14:10.480 Test: blockdev write zeroes read no split ...passed 00:14:10.480 Test: blockdev write zeroes read split ...passed 00:14:10.480 Test: blockdev write zeroes read split partial ...passed 00:14:10.480 Test: blockdev reset ...passed 00:14:10.480 Test: blockdev write read 8 blocks ...passed 00:14:10.480 Test: blockdev write read size > 128k ...passed 00:14:10.480 Test: blockdev write read invalid size ...passed 00:14:10.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.480 Test: blockdev write read max offset ...passed 00:14:10.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.480 Test: blockdev writev readv 8 blocks ...passed 00:14:10.480 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.480 Test: blockdev writev readv block ...passed 00:14:10.480 Test: blockdev writev readv size > 128k ...passed 00:14:10.480 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.480 Test: blockdev comparev and writev ...passed 00:14:10.480 Test: blockdev nvme passthru rw ...passed 00:14:10.480 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.480 Test: blockdev nvme admin passthru ...passed 00:14:10.480 Test: blockdev copy ...passed 00:14:10.480 Suite: bdevio tests on: nvme0n3 00:14:10.480 Test: blockdev write read block ...passed 00:14:10.480 Test: blockdev write zeroes read block ...passed 00:14:10.480 Test: blockdev write zeroes read no split ...passed 00:14:10.480 Test: blockdev write zeroes read split ...passed 00:14:10.480 Test: blockdev write zeroes read split partial ...passed 00:14:10.480 Test: 
blockdev reset ...passed 00:14:10.480 Test: blockdev write read 8 blocks ...passed 00:14:10.480 Test: blockdev write read size > 128k ...passed 00:14:10.480 Test: blockdev write read invalid size ...passed 00:14:10.480 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.480 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.480 Test: blockdev write read max offset ...passed 00:14:10.480 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.480 Test: blockdev writev readv 8 blocks ...passed 00:14:10.480 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.480 Test: blockdev writev readv block ...passed 00:14:10.481 Test: blockdev writev readv size > 128k ...passed 00:14:10.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.481 Test: blockdev comparev and writev ...passed 00:14:10.481 Test: blockdev nvme passthru rw ...passed 00:14:10.481 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.481 Test: blockdev nvme admin passthru ...passed 00:14:10.481 Test: blockdev copy ...passed 00:14:10.481 Suite: bdevio tests on: nvme0n2 00:14:10.481 Test: blockdev write read block ...passed 00:14:10.481 Test: blockdev write zeroes read block ...passed 00:14:10.481 Test: blockdev write zeroes read no split ...passed 00:14:10.481 Test: blockdev write zeroes read split ...passed 00:14:10.481 Test: blockdev write zeroes read split partial ...passed 00:14:10.481 Test: blockdev reset ...passed 00:14:10.481 Test: blockdev write read 8 blocks ...passed 00:14:10.481 Test: blockdev write read size > 128k ...passed 00:14:10.481 Test: blockdev write read invalid size ...passed 00:14:10.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.481 Test: blockdev write read max offset ...passed 00:14:10.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.481 Test: blockdev writev readv 8 blocks ...passed 00:14:10.481 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.481 Test: blockdev writev readv block ...passed 00:14:10.481 Test: blockdev writev readv size > 128k ...passed 00:14:10.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.481 Test: blockdev comparev and writev ...passed 00:14:10.481 Test: blockdev nvme passthru rw ...passed 00:14:10.481 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.481 Test: blockdev nvme admin passthru ...passed 00:14:10.481 Test: blockdev copy ...passed 00:14:10.481 Suite: bdevio tests on: nvme0n1 00:14:10.481 Test: blockdev write read block ...passed 00:14:10.481 Test: blockdev write zeroes read block ...passed 00:14:10.481 Test: blockdev write zeroes read no split ...passed 00:14:10.481 Test: blockdev write zeroes read split ...passed 00:14:10.481 Test: blockdev write zeroes read split partial ...passed 00:14:10.481 Test: blockdev reset ...passed 00:14:10.481 Test: blockdev write read 8 blocks ...passed 00:14:10.481 Test: blockdev write read size > 128k ...passed 00:14:10.481 Test: blockdev write read invalid size ...passed 00:14:10.481 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.481 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.481 Test: blockdev write read max offset ...passed 00:14:10.481 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.481 Test: blockdev 
writev readv 8 blocks ...passed 00:14:10.481 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.481 Test: blockdev writev readv block ...passed 00:14:10.481 Test: blockdev writev readv size > 128k ...passed 00:14:10.481 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.481 Test: blockdev comparev and writev ...passed 00:14:10.481 Test: blockdev nvme passthru rw ...passed 00:14:10.481 Test: blockdev nvme passthru vendor specific ...passed 00:14:10.481 Test: blockdev nvme admin passthru ...passed 00:14:10.481 Test: blockdev copy ...passed 00:14:10.481 00:14:10.481 Run Summary: Type Total Ran Passed Failed Inactive 00:14:10.481 suites 6 6 n/a 0 0 00:14:10.481 tests 138 138 138 0 0 00:14:10.481 asserts 780 780 780 0 n/a 00:14:10.481 00:14:10.481 Elapsed time = 0.429 seconds 00:14:10.481 0 00:14:10.740 18:32:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85661 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 85661 ']' 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 85661 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85661 00:14:10.741 killing process with pid 85661 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85661' 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 85661 00:14:10.741 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 85661 00:14:11.001 ************************************ 00:14:11.001 END TEST bdev_bounds 00:14:11.001 ************************************ 00:14:11.001 18:32:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:11.001 00:14:11.001 real 0m1.527s 00:14:11.001 user 0m3.471s 00:14:11.001 sys 0m0.404s 00:14:11.001 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.001 18:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:11.001 18:32:10 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:11.001 18:32:10 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:11.001 18:32:10 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.001 18:32:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.001 ************************************ 00:14:11.001 START TEST bdev_nbd 00:14:11.001 ************************************ 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 
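For reference, the bdev_bounds teardown traced above follows a killprocess-style pattern: confirm the pid is set and still alive, make sure it is not a sudo wrapper, announce the kill, then kill and reap it. Below is a minimal shell sketch reconstructed from the visible commands; the helper name and exact guards in autotest_common.sh are assumptions, not a verbatim copy.

# Sketch of a killprocess-style helper as exercised by the teardown above;
# reconstructed from the logged commands, so the real guards may differ.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # mirrors the '[ -z 85661 ]' guard
    kill -0 "$pid" 2>/dev/null || return 0         # process already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # reap it; ignore its exit status
}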
00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:14:11.001 18:32:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85711 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85711 /var/tmp/spdk-nbd.sock 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85711 ']' 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.001 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:11.260 [2024-07-23 18:32:11.080910] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
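The bdev_nbd stage traced above begins with a prologue: gate on a Linux host and the presence of /sys/module/nbd, pair six bdev names with six /dev/nbdX nodes, launch bdev_svc against the /var/tmp/spdk-nbd.sock RPC socket, and wait for it to listen. A condensed sketch of that prologue follows; the polling loop stands in for waitforlisten and the simplified trap are assumptions based on the log, not the literal helpers.

# Condensed sketch of the bdev_nbd prologue traced above (assumptions noted inline).
rpc_server=/var/tmp/spdk-nbd.sock
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

[[ $(uname -s) == Linux ]] || exit 0      # NBD is Linux-only
[[ -e /sys/module/nbd ]]   || exit 0      # skip when the nbd module is absent

# start a bare bdev application that owns the bdevs described in bdev.json
./test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
nbd_pid=$!
trap 'killprocess $nbd_pid' SIGINT SIGTERM EXIT

# crude stand-in for waitforlisten: poll until the RPC socket answers
until ./scripts/rpc.py -s "$rpc_server" rpc_get_methods &>/dev/null; do
    sleep 0.1
done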
00:14:11.260 [2024-07-23 18:32:11.081041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.260 [2024-07-23 18:32:11.229015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.260 [2024-07-23 18:32:11.296676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:11.830 18:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.089 1+0 records in 
00:14:12.089 1+0 records out 00:14:12.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098563 s, 4.2 MB/s 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:12.089 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.349 1+0 records in 00:14:12.349 1+0 records out 00:14:12.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006229 s, 6.6 MB/s 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:12.349 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.609 1+0 records in 00:14:12.609 1+0 records out 00:14:12.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850327 s, 4.8 MB/s 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:12.609 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.868 1+0 records in 00:14:12.868 1+0 records out 00:14:12.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769767 s, 5.3 MB/s 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:12.868 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.129 1+0 records in 00:14:13.129 1+0 records out 00:14:13.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761144 s, 5.4 MB/s 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:13.129 18:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.129 1+0 records in 00:14:13.129 1+0 records out 00:14:13.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00092144 s, 4.4 MB/s 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:13.129 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.388 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd0", 00:14:13.388 "bdev_name": "nvme0n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd1", 00:14:13.388 "bdev_name": "nvme0n2" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd2", 00:14:13.388 "bdev_name": "nvme0n3" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd3", 00:14:13.388 "bdev_name": "nvme1n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd4", 00:14:13.388 "bdev_name": "nvme2n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd5", 00:14:13.388 "bdev_name": "nvme3n1" 00:14:13.388 } 00:14:13.388 ]' 00:14:13.388 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:13.388 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd0", 00:14:13.388 "bdev_name": "nvme0n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd1", 00:14:13.388 "bdev_name": "nvme0n2" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd2", 00:14:13.388 "bdev_name": "nvme0n3" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd3", 00:14:13.388 "bdev_name": "nvme1n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd4", 00:14:13.388 "bdev_name": "nvme2n1" 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "nbd_device": "/dev/nbd5", 00:14:13.388 "bdev_name": "nvme3n1" 00:14:13.388 } 00:14:13.388 ]' 00:14:13.388 18:32:13 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.647 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.906 18:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:14.166 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.425 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.685 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:14.945 /dev/nbd0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:14.945 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.205 1+0 records in 00:14:15.205 1+0 records out 00:14:15.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700186 s, 5.8 MB/s 00:14:15.205 18:32:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:14:15.205 /dev/nbd1 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.205 1+0 records in 00:14:15.205 1+0 records out 00:14:15.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598283 s, 6.8 MB/s 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:15.205 18:32:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:15.205 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:14:15.465 /dev/nbd10 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.465 1+0 records in 00:14:15.465 1+0 records out 00:14:15.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860546 s, 4.8 MB/s 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:15.465 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:14:15.723 /dev/nbd11 00:14:15.723 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:15.723 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:15.723 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:15.724 18:32:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.724 1+0 records in 00:14:15.724 1+0 records out 00:14:15.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680113 s, 6.0 MB/s 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:15.724 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:15.982 /dev/nbd12 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:15.982 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.983 1+0 records in 00:14:15.983 1+0 records out 00:14:15.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080663 s, 5.1 MB/s 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:15.983 18:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:16.241 /dev/nbd13 00:14:16.241 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:16.241 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.242 1+0 records in 00:14:16.242 1+0 records out 00:14:16.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065105 s, 6.3 MB/s 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd0", 00:14:16.242 "bdev_name": "nvme0n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd1", 00:14:16.242 "bdev_name": "nvme0n2" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd10", 00:14:16.242 "bdev_name": "nvme0n3" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd11", 00:14:16.242 "bdev_name": "nvme1n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd12", 00:14:16.242 "bdev_name": "nvme2n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd13", 00:14:16.242 "bdev_name": "nvme3n1" 00:14:16.242 } 00:14:16.242 ]' 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:16.242 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd0", 00:14:16.242 "bdev_name": "nvme0n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd1", 00:14:16.242 "bdev_name": "nvme0n2" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd10", 00:14:16.242 "bdev_name": "nvme0n3" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd11", 00:14:16.242 "bdev_name": "nvme1n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd12", 00:14:16.242 "bdev_name": "nvme2n1" 00:14:16.242 }, 00:14:16.242 { 00:14:16.242 "nbd_device": "/dev/nbd13", 00:14:16.242 "bdev_name": "nvme3n1" 00:14:16.242 } 00:14:16.242 ]' 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:16.501 /dev/nbd1 00:14:16.501 /dev/nbd10 00:14:16.501 /dev/nbd11 00:14:16.501 /dev/nbd12 00:14:16.501 /dev/nbd13' 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:16.501 /dev/nbd1 00:14:16.501 /dev/nbd10 00:14:16.501 /dev/nbd11 00:14:16.501 /dev/nbd12 00:14:16.501 /dev/nbd13' 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:16.501 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:16.502 256+0 records in 00:14:16.502 256+0 records out 00:14:16.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139747 s, 75.0 MB/s 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:16.502 256+0 records in 00:14:16.502 256+0 records out 00:14:16.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0896597 s, 11.7 MB/s 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:16.502 256+0 records in 00:14:16.502 256+0 records out 00:14:16.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0895554 
s, 11.7 MB/s 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:16.502 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:16.769 256+0 records in 00:14:16.769 256+0 records out 00:14:16.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0869943 s, 12.1 MB/s 00:14:16.769 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:16.769 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:16.769 256+0 records in 00:14:16.769 256+0 records out 00:14:16.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0900034 s, 11.7 MB/s 00:14:16.769 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:16.769 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:17.045 256+0 records in 00:14:17.045 256+0 records out 00:14:17.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111479 s, 9.4 MB/s 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:17.045 256+0 records in 00:14:17.045 256+0 records out 00:14:17.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.089634 s, 11.7 MB/s 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:17.045 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
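The dd and cmp lines above (which continue below for /dev/nbd12 and /dev/nbd13) are the data-verify pass: 1 MiB of random data is written through each NBD device with O_DIRECT, then compared byte-for-byte against the source file. A minimal sketch of that loop, with paths taken from the log and the loop structure reconstructed rather than copied:

# Sketch of the write-then-verify pass traced above.
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 1 MiB of test data

for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"      # byte-for-byte read-back check
done

rm "$tmp_file"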
00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.046 18:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.316 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.575 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.834 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.092 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.093 18:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:18.352 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:18.611 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:18.611 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:18.611 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:18.611 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:18.612 malloc_lvol_verify 00:14:18.612 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:18.871 83a4a7c5-343d-4aa9-be89-33cef206de55 00:14:18.871 18:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:19.131 86fc47f9-e362-48a1-bc21-206830d19231 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:19.131 /dev/nbd0 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:19.131 Discarding device blocks: 0/4096 done 00:14:19.131 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:19.131 00:14:19.131 Allocating group tables: 
0/1 done 00:14:19.131 Writing inode tables: 0/1 done 00:14:19.131 mke2fs 1.46.5 (30-Dec-2021) 00:14:19.131 Creating journal (1024 blocks): done 00:14:19.131 Writing superblocks and filesystem accounting information: 0/1 done 00:14:19.131 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.131 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85711 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85711 ']' 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85711 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:19.391 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85711 00:14:19.391 killing process with pid 85711 00:14:19.392 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:19.392 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:19.392 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85711' 00:14:19.392 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85711 00:14:19.392 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85711 00:14:19.961 ************************************ 00:14:19.961 END TEST bdev_nbd 00:14:19.961 ************************************ 00:14:19.961 18:32:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:19.961 00:14:19.961 real 0m8.786s 00:14:19.961 user 0m11.956s 00:14:19.961 sys 0m3.592s 00:14:19.961 
18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:19.961 18:32:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:19.961 18:32:19 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:19.961 18:32:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:19.961 18:32:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:19.961 18:32:19 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:19.961 18:32:19 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:19.961 18:32:19 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.961 18:32:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.961 ************************************ 00:14:19.961 START TEST bdev_fio 00:14:19.961 ************************************ 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:19.961 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == 
*\f\i\o\-\3* ]] 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.961 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:19.962 ************************************ 00:14:19.962 START TEST bdev_fio_rw_verify 00:14:19.962 ************************************ 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 
--bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.962 18:32:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:20.220 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:20.220 fio-3.35 00:14:20.220 Starting 6 threads 00:14:32.427 00:14:32.427 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=86093: Tue Jul 23 18:32:30 2024 00:14:32.427 read: IOPS=36.5k, BW=143MiB/s (150MB/s)(1427MiB/10001msec) 00:14:32.427 slat (usec): min=2, 
max=6169, avg= 8.53, stdev=12.17 00:14:32.427 clat (usec): min=97, max=6993, avg=449.11, stdev=224.12 00:14:32.427 lat (usec): min=100, max=7015, avg=457.64, stdev=225.80 00:14:32.427 clat percentiles (usec): 00:14:32.427 | 50.000th=[ 412], 99.000th=[ 1090], 99.900th=[ 1532], 99.990th=[ 3621], 00:14:32.427 | 99.999th=[ 6915] 00:14:32.427 write: IOPS=36.9k, BW=144MiB/s (151MB/s)(1442MiB/10001msec); 0 zone resets 00:14:32.427 slat (usec): min=10, max=4235, avg=29.50, stdev=41.73 00:14:32.427 clat (usec): min=71, max=6546, avg=579.95, stdev=275.67 00:14:32.427 lat (usec): min=87, max=6631, avg=609.45, stdev=283.44 00:14:32.427 clat percentiles (usec): 00:14:32.427 | 50.000th=[ 537], 99.000th=[ 1385], 99.900th=[ 1860], 99.990th=[ 3458], 00:14:32.427 | 99.999th=[ 6325] 00:14:32.427 bw ( KiB/s): min=119294, max=173518, per=100.00%, avg=147766.68, stdev=2434.72, samples=114 00:14:32.427 iops : min=29823, max=43379, avg=36941.16, stdev=608.69, samples=114 00:14:32.427 lat (usec) : 100=0.01%, 250=13.36%, 500=40.95%, 750=29.01%, 1000=12.04% 00:14:32.427 lat (msec) : 2=4.58%, 4=0.05%, 10=0.01% 00:14:32.427 cpu : usr=55.11%, sys=27.38%, ctx=9171, majf=0, minf=31709 00:14:32.427 IO depths : 1=12.0%, 2=24.4%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.427 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.427 issued rwts: total=365336,369181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.427 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:32.427 00:14:32.427 Run status group 0 (all jobs): 00:14:32.428 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=1427MiB (1496MB), run=10001-10001msec 00:14:32.428 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=1442MiB (1512MB), run=10001-10001msec 00:14:32.428 ----------------------------------------------------- 00:14:32.428 Suppressions used: 00:14:32.428 count bytes template 00:14:32.428 6 48 /usr/src/fio/parse.c 00:14:32.428 3588 344448 /usr/src/fio/iolog.c 00:14:32.428 1 8 libtcmalloc_minimal.so 00:14:32.428 1 904 libcrypto.so 00:14:32.428 ----------------------------------------------------- 00:14:32.428 00:14:32.428 00:14:32.428 real 0m11.230s 00:14:32.428 user 0m33.813s 00:14:32.428 sys 0m16.835s 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:32.428 ************************************ 00:14:32.428 END TEST bdev_fio_rw_verify 00:14:32.428 ************************************ 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:14:32.428 
18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "31a9ed1a-5553-4ede-88ba-636a591febb2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31a9ed1a-5553-4ede-88ba-636a591febb2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "23a807d1-bba1-4989-b40f-3d66fb4f629e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23a807d1-bba1-4989-b40f-3d66fb4f629e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d898319a-f535-4453-a426-8d37243035c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d898319a-f535-4453-a426-8d37243035c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d332a229-7f2c-420b-9e66-9b8eb3eabb22"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d332a229-7f2c-420b-9e66-9b8eb3eabb22",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a5c4613a-f4f3-4ec7-a34b-0b9aad1c7d28"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a5c4613a-f4f3-4ec7-a34b-0b9aad1c7d28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "526a9576-6805-4649-997a-761a6a3f7139"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "526a9576-6805-4649-997a-761a6a3f7139",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:32.428 /home/vagrant/spdk_repo/spdk 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:32.428 00:14:32.428 real 0m11.448s 00:14:32.428 user 0m33.922s 00:14:32.428 sys 0m16.950s 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.428 18:32:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:32.428 ************************************ 00:14:32.428 END TEST bdev_fio 00:14:32.428 ************************************ 00:14:32.428 18:32:31 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:32.428 18:32:31 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:32.428 18:32:31 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:32.428 18:32:31 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.428 18:32:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.428 ************************************ 00:14:32.428 START TEST bdev_verify 00:14:32.428 ************************************ 00:14:32.428 18:32:31 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:32.428 [2024-07-23 18:32:31.429296] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:32.428 [2024-07-23 18:32:31.429434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86260 ] 00:14:32.428 [2024-07-23 18:32:31.574073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.428 [2024-07-23 18:32:31.646832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.428 [2024-07-23 18:32:31.646960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.428 Running I/O for 5 seconds... 00:14:37.704 00:14:37.704 Latency(us) 00:14:37.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.704 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0x80000 00:14:37.704 nvme0n1 : 5.04 2310.34 9.02 0.00 0.00 55326.03 8986.16 49910.39 00:14:37.704 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x80000 length 0x80000 00:14:37.704 nvme0n1 : 5.06 1797.07 7.02 0.00 0.00 70744.47 11390.10 69599.80 00:14:37.704 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0x80000 00:14:37.704 nvme0n2 : 5.05 2282.69 8.92 0.00 0.00 55935.37 10016.42 51284.07 00:14:37.704 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x80000 length 0x80000 00:14:37.704 nvme0n2 : 5.06 1796.53 7.02 0.00 0.00 70681.71 11561.81 69599.80 00:14:37.704 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0x80000 00:14:37.704 nvme0n3 : 5.03 2290.78 8.95 0.00 0.00 55680.11 9444.05 51284.07 00:14:37.704 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x80000 length 0x80000 00:14:37.704 nvme0n3 : 5.06 1795.78 7.01 0.00 0.00 70654.54 6868.40 70515.59 00:14:37.704 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0x20000 00:14:37.704 nvme1n1 : 5.04 2284.12 8.92 0.00 0.00 55787.82 9959.18 49681.44 00:14:37.704 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x20000 length 0x20000 00:14:37.704 nvme1n1 : 5.05 1797.99 7.02 0.00 0.00 71073.65 10245.37 70515.59 00:14:37.704 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0xbd0bd 00:14:37.704 nvme2n1 : 5.05 3009.74 11.76 0.00 0.00 42228.83 4950.97 44186.72 00:14:37.704 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:37.704 nvme2n1 : 5.05 2788.26 10.89 0.00 0.00 45663.39 5208.54 55863.00 00:14:37.704 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0x0 length 0xa0000 00:14:37.704 
nvme3n1 : 5.06 2328.85 9.10 0.00 0.00 54564.53 2947.69 51055.12 00:14:37.704 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.704 Verification LBA range: start 0xa0000 length 0xa0000 00:14:37.704 nvme3n1 : 5.06 1797.50 7.02 0.00 0.00 70802.69 10760.50 60899.83 00:14:37.705 =================================================================================================================== 00:14:37.705 Total : 26279.65 102.65 0.00 0.00 58152.40 2947.69 70515.59 00:14:37.705 00:14:37.705 real 0m6.007s 00:14:37.705 user 0m9.305s 00:14:37.705 sys 0m1.699s 00:14:37.705 18:32:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:37.705 18:32:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:37.705 ************************************ 00:14:37.705 END TEST bdev_verify 00:14:37.705 ************************************ 00:14:37.705 18:32:37 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.705 18:32:37 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:37.705 18:32:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:37.705 18:32:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.705 ************************************ 00:14:37.705 START TEST bdev_verify_big_io 00:14:37.705 ************************************ 00:14:37.705 18:32:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.705 [2024-07-23 18:32:37.511676] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:37.705 [2024-07-23 18:32:37.511822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86357 ] 00:14:37.705 [2024-07-23 18:32:37.644749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:37.705 [2024-07-23 18:32:37.713853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.705 [2024-07-23 18:32:37.713970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.965 Running I/O for 5 seconds... 
00:14:44.538 00:14:44.538 Latency(us) 00:14:44.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.538 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0x8000 00:14:44.538 nvme0n1 : 5.52 197.07 12.32 0.00 0.00 626155.19 43957.77 677682.31 00:14:44.538 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x8000 length 0x8000 00:14:44.538 nvme0n1 : 5.68 101.43 6.34 0.00 0.00 1207267.28 14996.01 2256498.92 00:14:44.538 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0x8000 00:14:44.538 nvme0n2 : 5.52 243.32 15.21 0.00 0.00 509492.66 80131.35 456061.88 00:14:44.538 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x8000 length 0x8000 00:14:44.538 nvme0n2 : 5.68 98.56 6.16 0.00 0.00 1193579.41 106231.28 2036710.06 00:14:44.538 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0x8000 00:14:44.538 nvme0n3 : 5.54 208.08 13.01 0.00 0.00 586115.34 85168.18 604419.35 00:14:44.538 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x8000 length 0x8000 00:14:44.538 nvme0n3 : 5.63 120.21 7.51 0.00 0.00 952360.89 41897.25 1663069.01 00:14:44.538 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0x2000 00:14:44.538 nvme1n1 : 5.54 229.67 14.35 0.00 0.00 522530.04 82878.71 461556.60 00:14:44.538 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x2000 length 0x2000 00:14:44.538 nvme1n1 : 5.85 150.38 9.40 0.00 0.00 737315.38 40065.68 1780289.73 00:14:44.538 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0xbd0b 00:14:44.538 nvme2n1 : 5.54 225.08 14.07 0.00 0.00 528125.47 10417.08 1648416.42 00:14:44.538 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:44.538 nvme2n1 : 5.94 169.57 10.60 0.00 0.00 630817.16 13336.15 1919489.34 00:14:44.538 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0x0 length 0xa000 00:14:44.538 nvme3n1 : 5.55 253.85 15.87 0.00 0.00 457854.12 12019.70 556798.43 00:14:44.538 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.538 Verification LBA range: start 0xa000 length 0xa000 00:14:44.538 nvme3n1 : 6.06 200.54 12.53 0.00 0.00 518863.87 554.48 2212541.15 00:14:44.538 =================================================================================================================== 00:14:44.538 Total : 2197.76 137.36 0.00 0.00 638330.98 554.48 2256498.92 00:14:44.538 00:14:44.538 real 0m7.043s 00:14:44.538 user 0m12.684s 00:14:44.538 sys 0m0.649s 00:14:44.538 18:32:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.538 18:32:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.538 ************************************ 00:14:44.538 END TEST bdev_verify_big_io 00:14:44.538 ************************************ 00:14:44.538 
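Both verification passes above are driven by the bdevperf example binary against the bdev.json generated earlier in the run; the only difference between bdev_verify and bdev_verify_big_io is the I/O size (4 KiB vs 64 KiB). For reference, the two invocations exactly as traced in this log (a standalone sketch; the JSON path is the one used throughout this job):

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
# bdev_verify: 4 KiB verify workload, queue depth 128, 5 seconds, core mask 0x3 (cores 0 and 1)
"$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# bdev_verify_big_io: identical flags except for the 64 KiB I/O size
"$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3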
18:32:44 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:44.538 18:32:44 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:44.538 18:32:44 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.538 18:32:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.538 ************************************ 00:14:44.538 START TEST bdev_write_zeroes 00:14:44.538 ************************************ 00:14:44.538 18:32:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:44.797 [2024-07-23 18:32:44.618949] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:44.797 [2024-07-23 18:32:44.619065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86457 ] 00:14:44.797 [2024-07-23 18:32:44.766440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.797 [2024-07-23 18:32:44.834299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.056 Running I/O for 1 seconds... 00:14:46.434 00:14:46.434 Latency(us) 00:14:46.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.434 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.434 nvme0n1 : 1.01 10536.02 41.16 0.00 0.00 12135.58 9386.82 28503.87 00:14:46.434 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.434 nvme0n2 : 1.01 10514.96 41.07 0.00 0.00 12152.58 9501.29 27702.55 00:14:46.434 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.434 nvme0n3 : 1.02 10500.18 41.02 0.00 0.00 12162.19 9386.82 26901.24 00:14:46.434 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.434 nvme1n1 : 1.03 10483.81 40.95 0.00 0.00 12173.84 9215.11 27931.50 00:14:46.435 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.435 nvme2n1 : 1.02 12196.29 47.64 0.00 0.00 10457.47 5838.14 19918.37 00:14:46.435 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:46.435 nvme3n1 : 1.03 10468.59 40.89 0.00 0.00 12102.12 3806.24 30678.86 00:14:46.435 =================================================================================================================== 00:14:46.435 Total : 64699.87 252.73 0.00 0.00 11826.71 3806.24 30678.86 00:14:46.435 00:14:46.435 real 0m1.934s 00:14:46.435 user 0m1.169s 00:14:46.435 sys 0m0.605s 00:14:46.435 18:32:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.435 18:32:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:46.435 ************************************ 00:14:46.435 END TEST bdev_write_zeroes 00:14:46.435 ************************************ 00:14:46.695 18:32:46 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 
-o 4096 -w write_zeroes -t 1 '' 00:14:46.695 18:32:46 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:46.695 18:32:46 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:46.695 18:32:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.695 ************************************ 00:14:46.695 START TEST bdev_json_nonenclosed 00:14:46.695 ************************************ 00:14:46.695 18:32:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:46.695 [2024-07-23 18:32:46.619180] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:46.695 [2024-07-23 18:32:46.619305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86499 ] 00:14:46.954 [2024-07-23 18:32:46.751388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.954 [2024-07-23 18:32:46.819012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.954 [2024-07-23 18:32:46.819113] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:46.954 [2024-07-23 18:32:46.819137] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:46.954 [2024-07-23 18:32:46.819156] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:46.954 00:14:46.954 real 0m0.429s 00:14:46.954 user 0m0.201s 00:14:46.954 sys 0m0.125s 00:14:46.954 18:32:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.954 18:32:46 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:46.954 ************************************ 00:14:46.954 END TEST bdev_json_nonenclosed 00:14:46.954 ************************************ 00:14:47.214 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:47.214 18:32:47 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:47.214 18:32:47 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:47.214 18:32:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.214 ************************************ 00:14:47.214 START TEST bdev_json_nonarray 00:14:47.214 ************************************ 00:14:47.214 18:32:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:47.214 [2024-07-23 18:32:47.114746] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:14:47.214 [2024-07-23 18:32:47.114903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86519 ] 00:14:47.214 [2024-07-23 18:32:47.261795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.474 [2024-07-23 18:32:47.332504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.474 [2024-07-23 18:32:47.332619] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:47.474 [2024-07-23 18:32:47.332648] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:47.474 [2024-07-23 18:32:47.332660] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:47.474 00:14:47.474 real 0m0.449s 00:14:47.474 user 0m0.213s 00:14:47.474 sys 0m0.132s 00:14:47.474 18:32:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:47.474 18:32:47 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:47.474 ************************************ 00:14:47.474 END TEST bdev_json_nonarray 00:14:47.474 ************************************ 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:47.733 18:32:47 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:48.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.180 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.180 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.180 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:11.330 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:11.330 00:15:11.330 real 1m10.425s 00:15:11.330 user 1m22.523s 00:15:11.330 sys 1m28.762s 00:15:11.330 18:33:11 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:11.330 18:33:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.330 ************************************ 00:15:11.330 END TEST blockdev_xnvme 00:15:11.330 ************************************ 00:15:11.330 18:33:11 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:11.330 18:33:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:11.330 18:33:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.330 18:33:11 -- common/autotest_common.sh@10 -- 
# set +x 00:15:11.330 ************************************ 00:15:11.330 START TEST ublk 00:15:11.330 ************************************ 00:15:11.330 18:33:11 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:11.598 * Looking for test storage... 00:15:11.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:11.598 18:33:11 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:11.598 18:33:11 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:11.598 18:33:11 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:11.598 18:33:11 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:11.598 18:33:11 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:11.598 18:33:11 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:11.598 18:33:11 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:11.598 18:33:11 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:11.598 18:33:11 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:11.598 18:33:11 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:11.598 18:33:11 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.598 18:33:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.598 ************************************ 00:15:11.598 START TEST test_save_ublk_config 00:15:11.598 ************************************ 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86823 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86823 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86823 ']' 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
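test_save_ublk_config, traced below, starts spdk_tgt with ublk debug logging, creates a ublk target and one ublk disk backed by a malloc bdev over RPC, and then snapshots the live configuration as JSON. The essence of that snapshot step, and how such a dump is typically replayed, in a short sketch (save_config is the call the test issues via rpc_cmd, as traced below; load_config is its rpc.py counterpart; the /tmp path is illustrative):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# snapshot the running target's configuration; this is what produces the
# "subsystems" JSON dump shown further down
"$RPC" save_config > /tmp/ublk_config.json
# a saved snapshot can later be replayed into a freshly started target
"$RPC" load_config < /tmp/ublk_config.json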
00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.598 18:33:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:11.598 [2024-07-23 18:33:11.607727] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:11.598 [2024-07-23 18:33:11.607867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86823 ] 00:15:11.856 [2024-07-23 18:33:11.756731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.856 [2024-07-23 18:33:11.833668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:12.425 [2024-07-23 18:33:12.390639] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:12.425 [2024-07-23 18:33:12.391001] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:12.425 malloc0 00:15:12.425 [2024-07-23 18:33:12.430709] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:12.425 [2024-07-23 18:33:12.430791] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:12.425 [2024-07-23 18:33:12.430814] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:12.425 [2024-07-23 18:33:12.430829] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:12.425 [2024-07-23 18:33:12.437687] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:12.425 [2024-07-23 18:33:12.437712] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:12.425 [2024-07-23 18:33:12.445605] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:12.425 [2024-07-23 18:33:12.445714] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:12.425 [2024-07-23 18:33:12.468608] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:12.425 0 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.425 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:12.684 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.684 18:33:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:12.684 "subsystems": [ 00:15:12.684 { 00:15:12.684 "subsystem": "keyring", 00:15:12.684 "config": [] 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "subsystem": "iobuf", 00:15:12.684 "config": [ 00:15:12.684 { 
00:15:12.684 "method": "iobuf_set_options", 00:15:12.684 "params": { 00:15:12.684 "small_pool_count": 8192, 00:15:12.684 "large_pool_count": 1024, 00:15:12.684 "small_bufsize": 8192, 00:15:12.684 "large_bufsize": 135168 00:15:12.684 } 00:15:12.684 } 00:15:12.684 ] 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "subsystem": "sock", 00:15:12.684 "config": [ 00:15:12.684 { 00:15:12.684 "method": "sock_set_default_impl", 00:15:12.684 "params": { 00:15:12.684 "impl_name": "posix" 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "sock_impl_set_options", 00:15:12.684 "params": { 00:15:12.684 "impl_name": "ssl", 00:15:12.684 "recv_buf_size": 4096, 00:15:12.684 "send_buf_size": 4096, 00:15:12.684 "enable_recv_pipe": true, 00:15:12.684 "enable_quickack": false, 00:15:12.684 "enable_placement_id": 0, 00:15:12.684 "enable_zerocopy_send_server": true, 00:15:12.684 "enable_zerocopy_send_client": false, 00:15:12.684 "zerocopy_threshold": 0, 00:15:12.684 "tls_version": 0, 00:15:12.684 "enable_ktls": false 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "sock_impl_set_options", 00:15:12.684 "params": { 00:15:12.684 "impl_name": "posix", 00:15:12.684 "recv_buf_size": 2097152, 00:15:12.684 "send_buf_size": 2097152, 00:15:12.684 "enable_recv_pipe": true, 00:15:12.684 "enable_quickack": false, 00:15:12.684 "enable_placement_id": 0, 00:15:12.684 "enable_zerocopy_send_server": true, 00:15:12.684 "enable_zerocopy_send_client": false, 00:15:12.684 "zerocopy_threshold": 0, 00:15:12.684 "tls_version": 0, 00:15:12.684 "enable_ktls": false 00:15:12.684 } 00:15:12.684 } 00:15:12.684 ] 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "subsystem": "vmd", 00:15:12.684 "config": [] 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "subsystem": "accel", 00:15:12.684 "config": [ 00:15:12.684 { 00:15:12.684 "method": "accel_set_options", 00:15:12.684 "params": { 00:15:12.684 "small_cache_size": 128, 00:15:12.684 "large_cache_size": 16, 00:15:12.684 "task_count": 2048, 00:15:12.684 "sequence_count": 2048, 00:15:12.684 "buf_count": 2048 00:15:12.684 } 00:15:12.684 } 00:15:12.684 ] 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "subsystem": "bdev", 00:15:12.684 "config": [ 00:15:12.684 { 00:15:12.684 "method": "bdev_set_options", 00:15:12.684 "params": { 00:15:12.684 "bdev_io_pool_size": 65535, 00:15:12.684 "bdev_io_cache_size": 256, 00:15:12.684 "bdev_auto_examine": true, 00:15:12.684 "iobuf_small_cache_size": 128, 00:15:12.684 "iobuf_large_cache_size": 16 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "bdev_raid_set_options", 00:15:12.684 "params": { 00:15:12.684 "process_window_size_kb": 1024 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "bdev_iscsi_set_options", 00:15:12.684 "params": { 00:15:12.684 "timeout_sec": 30 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "bdev_nvme_set_options", 00:15:12.684 "params": { 00:15:12.684 "action_on_timeout": "none", 00:15:12.684 "timeout_us": 0, 00:15:12.684 "timeout_admin_us": 0, 00:15:12.684 "keep_alive_timeout_ms": 10000, 00:15:12.684 "arbitration_burst": 0, 00:15:12.684 "low_priority_weight": 0, 00:15:12.684 "medium_priority_weight": 0, 00:15:12.684 "high_priority_weight": 0, 00:15:12.684 "nvme_adminq_poll_period_us": 10000, 00:15:12.684 "nvme_ioq_poll_period_us": 0, 00:15:12.684 "io_queue_requests": 0, 00:15:12.684 "delay_cmd_submit": true, 00:15:12.684 "transport_retry_count": 4, 00:15:12.684 "bdev_retry_count": 3, 00:15:12.684 "transport_ack_timeout": 0, 00:15:12.684 
"ctrlr_loss_timeout_sec": 0, 00:15:12.684 "reconnect_delay_sec": 0, 00:15:12.684 "fast_io_fail_timeout_sec": 0, 00:15:12.684 "disable_auto_failback": false, 00:15:12.684 "generate_uuids": false, 00:15:12.684 "transport_tos": 0, 00:15:12.684 "nvme_error_stat": false, 00:15:12.684 "rdma_srq_size": 0, 00:15:12.684 "io_path_stat": false, 00:15:12.684 "allow_accel_sequence": false, 00:15:12.684 "rdma_max_cq_size": 0, 00:15:12.684 "rdma_cm_event_timeout_ms": 0, 00:15:12.684 "dhchap_digests": [ 00:15:12.684 "sha256", 00:15:12.684 "sha384", 00:15:12.684 "sha512" 00:15:12.684 ], 00:15:12.684 "dhchap_dhgroups": [ 00:15:12.684 "null", 00:15:12.684 "ffdhe2048", 00:15:12.684 "ffdhe3072", 00:15:12.684 "ffdhe4096", 00:15:12.684 "ffdhe6144", 00:15:12.684 "ffdhe8192" 00:15:12.684 ] 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "bdev_nvme_set_hotplug", 00:15:12.684 "params": { 00:15:12.684 "period_us": 100000, 00:15:12.684 "enable": false 00:15:12.684 } 00:15:12.684 }, 00:15:12.684 { 00:15:12.684 "method": "bdev_malloc_create", 00:15:12.684 "params": { 00:15:12.685 "name": "malloc0", 00:15:12.685 "num_blocks": 8192, 00:15:12.685 "block_size": 4096, 00:15:12.685 "physical_block_size": 4096, 00:15:12.685 "uuid": "29db7b8e-9980-49f1-9d05-f8e7e94e68cc", 00:15:12.685 "optimal_io_boundary": 0 00:15:12.685 } 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "method": "bdev_wait_for_examine" 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "scsi", 00:15:12.685 "config": null 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "scheduler", 00:15:12.685 "config": [ 00:15:12.685 { 00:15:12.685 "method": "framework_set_scheduler", 00:15:12.685 "params": { 00:15:12.685 "name": "static" 00:15:12.685 } 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "vhost_scsi", 00:15:12.685 "config": [] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "vhost_blk", 00:15:12.685 "config": [] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "ublk", 00:15:12.685 "config": [ 00:15:12.685 { 00:15:12.685 "method": "ublk_create_target", 00:15:12.685 "params": { 00:15:12.685 "cpumask": "1" 00:15:12.685 } 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "method": "ublk_start_disk", 00:15:12.685 "params": { 00:15:12.685 "bdev_name": "malloc0", 00:15:12.685 "ublk_id": 0, 00:15:12.685 "num_queues": 1, 00:15:12.685 "queue_depth": 128 00:15:12.685 } 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "nbd", 00:15:12.685 "config": [] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "nvmf", 00:15:12.685 "config": [ 00:15:12.685 { 00:15:12.685 "method": "nvmf_set_config", 00:15:12.685 "params": { 00:15:12.685 "discovery_filter": "match_any", 00:15:12.685 "admin_cmd_passthru": { 00:15:12.685 "identify_ctrlr": false 00:15:12.685 } 00:15:12.685 } 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "method": "nvmf_set_max_subsystems", 00:15:12.685 "params": { 00:15:12.685 "max_subsystems": 1024 00:15:12.685 } 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "method": "nvmf_set_crdt", 00:15:12.685 "params": { 00:15:12.685 "crdt1": 0, 00:15:12.685 "crdt2": 0, 00:15:12.685 "crdt3": 0 00:15:12.685 } 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 }, 00:15:12.685 { 00:15:12.685 "subsystem": "iscsi", 00:15:12.685 "config": [ 00:15:12.685 { 00:15:12.685 "method": "iscsi_set_options", 00:15:12.685 "params": { 00:15:12.685 "node_base": "iqn.2016-06.io.spdk", 00:15:12.685 "max_sessions": 128, 00:15:12.685 "max_connections_per_session": 
2, 00:15:12.685 "max_queue_depth": 64, 00:15:12.685 "default_time2wait": 2, 00:15:12.685 "default_time2retain": 20, 00:15:12.685 "first_burst_length": 8192, 00:15:12.685 "immediate_data": true, 00:15:12.685 "allow_duplicated_isid": false, 00:15:12.685 "error_recovery_level": 0, 00:15:12.685 "nop_timeout": 60, 00:15:12.685 "nop_in_interval": 30, 00:15:12.685 "disable_chap": false, 00:15:12.685 "require_chap": false, 00:15:12.685 "mutual_chap": false, 00:15:12.685 "chap_group": 0, 00:15:12.685 "max_large_datain_per_connection": 64, 00:15:12.685 "max_r2t_per_connection": 4, 00:15:12.685 "pdu_pool_size": 36864, 00:15:12.685 "immediate_data_pool_size": 16384, 00:15:12.685 "data_out_pool_size": 2048 00:15:12.685 } 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 } 00:15:12.685 ] 00:15:12.685 }' 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86823 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86823 ']' 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86823 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86823 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:12.685 killing process with pid 86823 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86823' 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86823 00:15:12.685 18:33:12 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86823 00:15:13.254 [2024-07-23 18:33:13.216245] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.254 [2024-07-23 18:33:13.251673] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.254 [2024-07-23 18:33:13.254584] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.254 [2024-07-23 18:33:13.261591] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.254 [2024-07-23 18:33:13.261655] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:13.254 [2024-07-23 18:33:13.261676] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:13.254 [2024-07-23 18:33:13.261704] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.254 [2024-07-23 18:33:13.261866] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:13.822 18:33:13 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86861 00:15:13.822 18:33:13 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86861 00:15:13.822 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86861 ']' 00:15:13.822 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.822 18:33:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:13.822 "subsystems": [ 00:15:13.822 { 00:15:13.822 "subsystem": "keyring", 00:15:13.822 "config": [] 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "subsystem": "iobuf", 00:15:13.822 
"config": [ 00:15:13.822 { 00:15:13.822 "method": "iobuf_set_options", 00:15:13.822 "params": { 00:15:13.822 "small_pool_count": 8192, 00:15:13.822 "large_pool_count": 1024, 00:15:13.822 "small_bufsize": 8192, 00:15:13.822 "large_bufsize": 135168 00:15:13.822 } 00:15:13.822 } 00:15:13.822 ] 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "subsystem": "sock", 00:15:13.822 "config": [ 00:15:13.822 { 00:15:13.822 "method": "sock_set_default_impl", 00:15:13.822 "params": { 00:15:13.822 "impl_name": "posix" 00:15:13.822 } 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "method": "sock_impl_set_options", 00:15:13.822 "params": { 00:15:13.822 "impl_name": "ssl", 00:15:13.822 "recv_buf_size": 4096, 00:15:13.822 "send_buf_size": 4096, 00:15:13.822 "enable_recv_pipe": true, 00:15:13.822 "enable_quickack": false, 00:15:13.822 "enable_placement_id": 0, 00:15:13.822 "enable_zerocopy_send_server": true, 00:15:13.822 "enable_zerocopy_send_client": false, 00:15:13.822 "zerocopy_threshold": 0, 00:15:13.822 "tls_version": 0, 00:15:13.822 "enable_ktls": false 00:15:13.822 } 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "method": "sock_impl_set_options", 00:15:13.822 "params": { 00:15:13.822 "impl_name": "posix", 00:15:13.822 "recv_buf_size": 2097152, 00:15:13.822 "send_buf_size": 2097152, 00:15:13.822 "enable_recv_pipe": true, 00:15:13.822 "enable_quickack": false, 00:15:13.822 "enable_placement_id": 0, 00:15:13.822 "enable_zerocopy_send_server": true, 00:15:13.822 "enable_zerocopy_send_client": false, 00:15:13.822 "zerocopy_threshold": 0, 00:15:13.822 "tls_version": 0, 00:15:13.822 "enable_ktls": false 00:15:13.822 } 00:15:13.822 } 00:15:13.822 ] 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "subsystem": "vmd", 00:15:13.822 "config": [] 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "subsystem": "accel", 00:15:13.822 "config": [ 00:15:13.822 { 00:15:13.822 "method": "accel_set_options", 00:15:13.822 "params": { 00:15:13.822 "small_cache_size": 128, 00:15:13.822 "large_cache_size": 16, 00:15:13.822 "task_count": 2048, 00:15:13.822 "sequence_count": 2048, 00:15:13.822 "buf_count": 2048 00:15:13.822 } 00:15:13.822 } 00:15:13.822 ] 00:15:13.822 }, 00:15:13.822 { 00:15:13.822 "subsystem": "bdev", 00:15:13.822 "config": [ 00:15:13.822 { 00:15:13.822 "method": "bdev_set_options", 00:15:13.823 "params": { 00:15:13.823 "bdev_io_pool_size": 65535, 00:15:13.823 "bdev_io_cache_size": 256, 00:15:13.823 "bdev_auto_examine": true, 00:15:13.823 "iobuf_small_cache_size": 128, 00:15:13.823 "iobuf_large_cache_size": 16 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_raid_set_options", 00:15:13.823 "params": { 00:15:13.823 "process_window_size_kb": 1024 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_iscsi_set_options", 00:15:13.823 "params": { 00:15:13.823 "timeout_sec": 30 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_nvme_set_options", 00:15:13.823 "params": { 00:15:13.823 "action_on_timeout": "none", 00:15:13.823 "timeout_us": 0, 00:15:13.823 "timeout_admin_us": 0, 00:15:13.823 "keep_alive_timeout_ms": 10000, 00:15:13.823 "arbitration_burst": 0, 00:15:13.823 "low_priority_weight": 0, 00:15:13.823 "medium_priority_weight": 0, 00:15:13.823 "high_priority_weight": 0, 00:15:13.823 "nvme_adminq_poll_period_us": 10000, 00:15:13.823 "nvme_ioq_poll_period_us": 0, 00:15:13.823 "io_queue_requests": 0, 00:15:13.823 "delay_cmd_submit": true, 00:15:13.823 "transport_retry_count": 4, 00:15:13.823 "bdev_retry_count": 3, 00:15:13.823 "transport_ack_timeout": 0, 
00:15:13.823 "ctrlr_loss_timeout_sec": 0, 00:15:13.823 "reconnect_delay_sec": 0, 00:15:13.823 "fast_io_fail_timeout_sec": 0, 00:15:13.823 "disable_auto_failback": false, 00:15:13.823 "generate_uuids": false, 00:15:13.823 "transport_tos": 0, 00:15:13.823 "nvme_error_stat": false, 00:15:13.823 "rdma_srq_size": 0, 00:15:13.823 "io_path_stat": false, 00:15:13.823 "allow_accel_sequence": false, 00:15:13.823 "rdma_max_cq_size": 0, 00:15:13.823 "rdma_cm_event_timeout_ms": 0, 00:15:13.823 "dhchap_digests": [ 00:15:13.823 "sha256", 00:15:13.823 "sha384", 00:15:13.823 "sha512" 00:15:13.823 ], 00:15:13.823 "dhchap_dhgroups": [ 00:15:13.823 "null", 00:15:13.823 "ffdhe2048", 00:15:13.823 "ffdhe3072", 00:15:13.823 "ffdhe4096", 00:15:13.823 "ffdhe6144", 00:15:13.823 "ffdhe8192" 00:15:13.823 ] 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_nvme_set_hotplug", 00:15:13.823 "params": { 00:15:13.823 "period_us": 100000, 00:15:13.823 "enable": false 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_malloc_create", 00:15:13.823 "params": { 00:15:13.823 "name": "malloc0", 00:15:13.823 "num_blocks": 8192, 00:15:13.823 "block_size": 4096, 00:15:13.823 "physical_block_size": 4096, 00:15:13.823 "uuid": "29db7b8e-9980-49f1-9d05-f8e7e94e68cc", 00:15:13.823 "optimal_io_boundary": 0 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "bdev_wait_for_examine" 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "scsi", 00:15:13.823 "config": null 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "scheduler", 00:15:13.823 "config": [ 00:15:13.823 { 00:15:13.823 "method": "framework_set_scheduler", 00:15:13.823 "params": { 00:15:13.823 "name": "static" 00:15:13.823 } 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "vhost_scsi", 00:15:13.823 "config": [] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "vhost_blk", 00:15:13.823 "config": [] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "ublk", 00:15:13.823 "config": [ 00:15:13.823 { 00:15:13.823 "method": "ublk_create_target", 00:15:13.823 "params": { 00:15:13.823 "cpumask": "1" 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "ublk_start_disk", 00:15:13.823 "params": { 00:15:13.823 "bdev_name": "malloc0", 00:15:13.823 "ublk_id": 0, 00:15:13.823 "num_queues": 1, 00:15:13.823 "queue_depth": 128 00:15:13.823 } 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "nbd", 00:15:13.823 "config": [] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "nvmf", 00:15:13.823 "config": [ 00:15:13.823 { 00:15:13.823 "method": "nvmf_set_config", 00:15:13.823 "params": { 00:15:13.823 "discovery_filter": "match_any", 00:15:13.823 "admin_cmd_passthru": { 00:15:13.823 "identify_ctrlr": false 00:15:13.823 } 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "nvmf_set_max_subsystems", 00:15:13.823 "params": { 00:15:13.823 "max_subsystems": 1024 00:15:13.823 } 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "method": "nvmf_set_crdt", 00:15:13.823 "params": { 00:15:13.823 "crdt1": 0, 00:15:13.823 "crdt2": 0, 00:15:13.823 "crdt3": 0 00:15:13.823 } 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }, 00:15:13.823 { 00:15:13.823 "subsystem": "iscsi", 00:15:13.823 "config": [ 00:15:13.823 { 00:15:13.823 "method": "iscsi_set_options", 00:15:13.823 "params": { 00:15:13.823 "node_base": "iqn.2016-06.io.spdk", 00:15:13.823 "max_sessions": 128, 00:15:13.823 
"max_connections_per_session": 2, 00:15:13.823 "max_queue_depth": 64, 00:15:13.823 "default_time2wait": 2, 00:15:13.823 "default_time2retain": 20, 00:15:13.823 "first_burst_length": 8192, 00:15:13.823 "immediate_data": true, 00:15:13.823 "allow_duplicated_isid": false, 00:15:13.823 "error_recovery_level": 0, 00:15:13.823 "nop_timeout": 60, 00:15:13.823 "nop_in_interval": 30, 00:15:13.823 "disable_chap": false, 00:15:13.823 "require_chap": false, 00:15:13.823 "mutual_chap": false, 00:15:13.823 "chap_group": 0, 00:15:13.823 "max_large_datain_per_connection": 64, 00:15:13.823 "max_r2t_per_connection": 4, 00:15:13.823 "pdu_pool_size": 36864, 00:15:13.823 "immediate_data_pool_size": 16384, 00:15:13.823 "data_out_pool_size": 2048 00:15:13.823 } 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 } 00:15:13.823 ] 00:15:13.823 }' 00:15:13.823 18:33:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:13.823 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.823 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.823 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:13.823 18:33:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:13.823 [2024-07-23 18:33:13.744543] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:13.823 [2024-07-23 18:33:13.744700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86861 ] 00:15:14.083 [2024-07-23 18:33:13.892877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.083 [2024-07-23 18:33:13.970346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.651 [2024-07-23 18:33:14.420595] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:14.651 [2024-07-23 18:33:14.420925] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:14.651 [2024-07-23 18:33:14.428772] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:14.651 [2024-07-23 18:33:14.428866] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:14.651 [2024-07-23 18:33:14.428877] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:14.651 [2024-07-23 18:33:14.428885] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:14.652 [2024-07-23 18:33:14.437713] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:14.652 [2024-07-23 18:33:14.437733] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:14.652 [2024-07-23 18:33:14.443640] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:14.652 [2024-07-23 18:33:14.443732] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:14.652 [2024-07-23 18:33:14.460606] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:14.652 18:33:14 
ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86861 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86861 ']' 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86861 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86861 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:14.652 killing process with pid 86861 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86861' 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86861 00:15:14.652 18:33:14 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86861 00:15:15.221 [2024-07-23 18:33:15.119658] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:15.221 [2024-07-23 18:33:15.150670] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:15.221 [2024-07-23 18:33:15.154587] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:15.221 [2024-07-23 18:33:15.165590] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:15.221 [2024-07-23 18:33:15.165653] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:15.221 [2024-07-23 18:33:15.165673] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:15.221 [2024-07-23 18:33:15.165701] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:15.221 [2024-07-23 18:33:15.165847] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:15.787 18:33:15 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:15.787 00:15:15.787 real 0m4.044s 00:15:15.787 user 0m2.789s 00:15:15.787 sys 0m1.878s 00:15:15.787 18:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:15.787 18:33:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.787 ************************************ 00:15:15.787 END TEST test_save_ublk_config 00:15:15.787 ************************************ 00:15:15.787 18:33:15 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86916 
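The block above closes the loop on test_save_ublk_config: the JSON captured earlier was echoed into a second spdk_tgt through -c /dev/fd/63, and the test confirmed that /dev/ublkb0 reappeared before killing the target. The same restore step can be sketched with an ordinary config file instead of the process-substitution fd used by the script (the file path is an assumption carried over from the sketch above):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &        # boot a fresh target from the saved JSON
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'     # expect /dev/ublkb0, as ublk.sh@122 checks
  [[ -b /dev/ublkb0 ]] && echo "ublk disk restored from saved config"                       # block device is back, as ublk.sh@123 checks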
00:15:15.787 18:33:15 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:15.787 18:33:15 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.787 18:33:15 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86916 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@827 -- # '[' -z 86916 ']' 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.787 18:33:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:15.787 [2024-07-23 18:33:15.696748] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:15.787 [2024-07-23 18:33:15.696873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86916 ] 00:15:16.047 [2024-07-23 18:33:15.843768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.047 [2024-07-23 18:33:15.912714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.047 [2024-07-23 18:33:15.912827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.616 18:33:16 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.616 18:33:16 ublk -- common/autotest_common.sh@860 -- # return 0 00:15:16.616 18:33:16 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:16.616 18:33:16 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:16.616 18:33:16 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.616 18:33:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.616 ************************************ 00:15:16.616 START TEST test_create_ublk 00:15:16.616 ************************************ 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:15:16.616 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.616 [2024-07-23 18:33:16.495614] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:16.616 [2024-07-23 18:33:16.501752] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.616 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:16.616 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.616 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # 
malloc_name=Malloc0 00:15:16.616 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.616 [2024-07-23 18:33:16.623763] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:16.616 [2024-07-23 18:33:16.624295] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:16.616 [2024-07-23 18:33:16.624318] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:16.616 [2024-07-23 18:33:16.624326] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:16.616 [2024-07-23 18:33:16.633006] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:16.616 [2024-07-23 18:33:16.633030] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:16.616 [2024-07-23 18:33:16.639622] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:16.616 [2024-07-23 18:33:16.647647] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:16.616 [2024-07-23 18:33:16.663615] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:16.616 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:16.877 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.877 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.877 18:33:16 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:16.877 { 00:15:16.877 "ublk_device": "/dev/ublkb0", 00:15:16.877 "id": 0, 00:15:16.877 "queue_depth": 512, 00:15:16.877 "num_queues": 4, 00:15:16.877 "bdev_name": "Malloc0" 00:15:16.877 } 00:15:16.877 ]' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:16.877 18:33:16 
ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:16.877 18:33:16 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:17.137 fio: verification read phase will never start because write phase uses all of runtime 00:15:17.137 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:17.137 fio-3.35 00:15:17.137 Starting 1 process 00:15:27.119 00:15:27.119 fio_test: (groupid=0, jobs=1): err= 0: pid=86956: Tue Jul 23 18:33:27 2024 00:15:27.119 write: IOPS=17.8k, BW=69.4MiB/s (72.7MB/s)(694MiB/10000msec); 0 zone resets 00:15:27.119 clat (usec): min=31, max=4005, avg=55.54, stdev=88.18 00:15:27.119 lat (usec): min=32, max=4005, avg=55.95, stdev=88.19 00:15:27.119 clat percentiles (usec): 00:15:27.119 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 50], 00:15:27.119 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 52], 60.00th=[ 53], 00:15:27.119 | 70.00th=[ 54], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 60], 00:15:27.119 | 99.00th=[ 71], 99.50th=[ 77], 99.90th=[ 1713], 99.95th=[ 2671], 00:15:27.119 | 99.99th=[ 3392] 00:15:27.119 bw ( KiB/s): min=69960, max=81000, per=100.00%, avg=71168.53, stdev=2437.11, samples=19 00:15:27.119 iops : min=17490, max=20250, avg=17792.11, stdev=609.28, samples=19 00:15:27.119 lat (usec) : 50=21.37%, 100=78.44%, 250=0.02%, 500=0.01%, 750=0.01% 00:15:27.119 lat (usec) : 1000=0.02% 00:15:27.119 lat (msec) : 2=0.06%, 4=0.08%, 10=0.01% 00:15:27.119 cpu : usr=2.18%, sys=8.78%, ctx=177564, majf=0, minf=796 00:15:27.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.119 issued rwts: total=0,177564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.119 00:15:27.119 Run status group 0 (all jobs): 00:15:27.119 WRITE: bw=69.4MiB/s (72.7MB/s), 69.4MiB/s-69.4MiB/s (72.7MB/s-72.7MB/s), io=694MiB (727MB), run=10000-10000msec 00:15:27.119 00:15:27.119 Disk stats (read/write): 00:15:27.119 ublkb0: ios=0/175741, merge=0/0, ticks=0/8868, in_queue=8869, util=99.14% 00:15:27.379 18:33:27 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 
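For readability, the fio job that run_fio_test assembled piece by piece in the trace above can be written out as a single command; the parameters are exactly the ones shown in the trace (134217728 bytes is the 128 MiB FILE_SIZE set at the top of ublk.sh), so this is only a restatement of the same workload, not a different one:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 \
      --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

fio's note that the verification read phase never starts is expected here: the 10-second time_based write phase consumes the entire runtime, so the job only exercises pattern writes against the ublk block device.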
00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.379 [2024-07-23 18:33:27.180021] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:27.379 [2024-07-23 18:33:27.216606] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:27.379 [2024-07-23 18:33:27.217408] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:27.379 [2024-07-23 18:33:27.222597] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:27.379 [2024-07-23 18:33:27.222948] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:27.379 [2024-07-23 18:33:27.222967] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.379 18:33:27 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.379 [2024-07-23 18:33:27.232715] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:27.379 request: 00:15:27.379 { 00:15:27.379 "ublk_id": 0, 00:15:27.379 "method": "ublk_stop_disk", 00:15:27.379 "req_id": 1 00:15:27.379 } 00:15:27.379 Got JSON-RPC error response 00:15:27.379 response: 00:15:27.379 { 00:15:27.379 "code": -19, 00:15:27.379 "message": "No such device" 00:15:27.379 } 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.379 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.380 18:33:27 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.380 [2024-07-23 18:33:27.248711] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:27.380 [2024-07-23 18:33:27.251531] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:27.380 [2024-07-23 18:33:27.251587] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:27.380 18:33:27 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.380 18:33:27 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.380 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:27.380 18:33:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:27.639 18:33:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:27.639 00:15:27.639 real 0m10.966s 00:15:27.639 user 0m0.608s 00:15:27.639 sys 0m1.004s 00:15:27.639 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.639 18:33:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.639 ************************************ 00:15:27.639 END TEST test_create_ublk 00:15:27.639 ************************************ 00:15:27.639 18:33:27 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:27.639 18:33:27 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:27.639 18:33:27 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.639 18:33:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.639 ************************************ 00:15:27.639 START TEST test_create_multi_ublk 00:15:27.639 ************************************ 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.639 [2024-07-23 18:33:27.527597] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:27.639 [2024-07-23 18:33:27.529465] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 
0 3 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.639 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.639 [2024-07-23 18:33:27.663773] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:27.639 [2024-07-23 18:33:27.664215] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:27.639 [2024-07-23 18:33:27.664232] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:27.639 [2024-07-23 18:33:27.664243] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:27.639 [2024-07-23 18:33:27.671631] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:27.639 [2024-07-23 18:33:27.671659] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:27.639 [2024-07-23 18:33:27.679614] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:27.639 [2024-07-23 18:33:27.680209] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:27.639 [2024-07-23 18:33:27.690726] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:27.899 [2024-07-23 18:33:27.815779] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:27.899 [2024-07-23 18:33:27.816209] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:27.899 [2024-07-23 18:33:27.816229] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:27.899 
[2024-07-23 18:33:27.816237] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:27.899 [2024-07-23 18:33:27.822108] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:27.899 [2024-07-23 18:33:27.822132] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:27.899 [2024-07-23 18:33:27.831598] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:27.899 [2024-07-23 18:33:27.832192] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:27.899 [2024-07-23 18:33:27.837116] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.899 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.159 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:28.159 18:33:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:28.159 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.159 18:33:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 [2024-07-23 18:33:27.970742] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:28.159 [2024-07-23 18:33:27.971192] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:28.159 [2024-07-23 18:33:27.971209] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:28.159 [2024-07-23 18:33:27.971295] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:28.159 [2024-07-23 18:33:27.978622] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:28.159 [2024-07-23 18:33:27.978650] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:28.159 [2024-07-23 18:33:27.986589] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:28.159 [2024-07-23 18:33:27.987205] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:28.159 [2024-07-23 18:33:27.995651] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 [2024-07-23 18:33:28.128727] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:28.159 [2024-07-23 18:33:28.129174] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:28.159 [2024-07-23 18:33:28.129194] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:28.159 [2024-07-23 18:33:28.129202] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:28.159 [2024-07-23 18:33:28.138010] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:28.159 [2024-07-23 18:33:28.138033] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:28.159 [2024-07-23 18:33:28.144615] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:28.159 [2024-07-23 18:33:28.145234] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:28.159 [2024-07-23 18:33:28.153668] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:28.159 { 00:15:28.159 "ublk_device": "/dev/ublkb0", 00:15:28.159 "id": 0, 00:15:28.159 "queue_depth": 512, 00:15:28.159 "num_queues": 4, 00:15:28.159 "bdev_name": "Malloc0" 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "ublk_device": "/dev/ublkb1", 00:15:28.159 "id": 1, 00:15:28.159 "queue_depth": 512, 00:15:28.159 "num_queues": 4, 00:15:28.159 "bdev_name": "Malloc1" 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "ublk_device": "/dev/ublkb2", 00:15:28.159 "id": 2, 00:15:28.159 "queue_depth": 512, 00:15:28.159 "num_queues": 4, 00:15:28.159 "bdev_name": "Malloc2" 00:15:28.159 }, 00:15:28.159 { 00:15:28.159 "ublk_device": "/dev/ublkb3", 00:15:28.159 "id": 3, 00:15:28.159 "queue_depth": 512, 00:15:28.159 "num_queues": 4, 00:15:28.159 "bdev_name": "Malloc3" 00:15:28.159 } 00:15:28.159 ]' 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:28.159 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:28.419 18:33:28 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:28.419 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:28.679 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:28.939 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:29.199 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- 
# [[ 512 = \5\1\2 ]] 00:15:29.199 18:33:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.199 [2024-07-23 18:33:29.100726] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:29.199 [2024-07-23 18:33:29.151659] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:29.199 [2024-07-23 18:33:29.152902] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:29.199 [2024-07-23 18:33:29.162653] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:29.199 [2024-07-23 18:33:29.162959] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:29.199 [2024-07-23 18:33:29.162976] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.199 [2024-07-23 18:33:29.170673] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:29.199 [2024-07-23 18:33:29.202672] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:29.199 [2024-07-23 18:33:29.206927] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:29.199 [2024-07-23 18:33:29.214622] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:29.199 [2024-07-23 18:33:29.214894] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:29.199 [2024-07-23 18:33:29.214912] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.199 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.199 [2024-07-23 18:33:29.233687] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:29.459 [2024-07-23 
18:33:29.264652] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:29.459 [2024-07-23 18:33:29.268959] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:29.459 [2024-07-23 18:33:29.272647] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:29.459 [2024-07-23 18:33:29.272906] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:29.459 [2024-07-23 18:33:29.272924] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.459 [2024-07-23 18:33:29.282808] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:29.459 [2024-07-23 18:33:29.313112] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:29.459 [2024-07-23 18:33:29.315947] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:29.459 [2024-07-23 18:33:29.322590] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:29.459 [2024-07-23 18:33:29.322875] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:29.459 [2024-07-23 18:33:29.322890] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.459 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:29.459 [2024-07-23 18:33:29.495749] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:29.459 [2024-07-23 18:33:29.497254] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:29.459 [2024-07-23 18:33:29.497300] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.719 18:33:29 
ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.719 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:29.979 00:15:29.979 real 0m2.480s 00:15:29.979 user 0m1.086s 00:15:29.979 sys 0m0.205s 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.979 18:33:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.979 ************************************ 00:15:29.979 END TEST test_create_multi_ublk 00:15:29.979 ************************************ 00:15:30.239 18:33:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:30.239 18:33:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:30.239 18:33:30 ublk -- ublk/ublk.sh@130 -- # killprocess 86916 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@946 -- # '[' -z 86916 ']' 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@950 -- # kill -0 86916 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@951 -- # uname 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86916 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@956 -- # 
'[' reactor_0 = sudo ']' 00:15:30.239 killing process with pid 86916 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86916' 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@965 -- # kill 86916 00:15:30.239 18:33:30 ublk -- common/autotest_common.sh@970 -- # wait 86916 00:15:30.512 [2024-07-23 18:33:30.340061] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:30.512 [2024-07-23 18:33:30.340128] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:30.815 00:15:30.815 real 0m19.352s 00:15:30.815 user 0m30.266s 00:15:30.815 sys 0m7.526s 00:15:30.815 18:33:30 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:30.815 18:33:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.815 ************************************ 00:15:30.815 END TEST ublk 00:15:30.815 ************************************ 00:15:30.815 18:33:30 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:30.815 18:33:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:30.815 18:33:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.815 18:33:30 -- common/autotest_common.sh@10 -- # set +x 00:15:30.815 ************************************ 00:15:30.815 START TEST ublk_recovery 00:15:30.815 ************************************ 00:15:30.815 18:33:30 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:31.075 * Looking for test storage... 00:15:31.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:31.075 18:33:30 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87262 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:31.075 18:33:30 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87262 00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87262 ']' 00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:31.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
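A note for readers reconstructing the test_create_multi_ublk checks that just finished: the ublk.sh@74-78 comparisons above are plain jq filters over the target's disk-list JSON. A minimal sketch of the device-0 checks follows; the jq filters and expected values are taken verbatim from the trace, while ublk_get_disks as the RPC producing that JSON is an assumption (the call itself is not shown in this excerpt), and the full rpc.py path is shortened for readability.

# query the ublk disk list and check device 0 the same way ublk.sh@74-78 does
disks=$(scripts/rpc.py ublk_get_disks)                           # assumed RPC; not traced in this excerpt
[[ $(jq -r '.[0].ublk_device' <<<"$disks") == /dev/ublkb0 ]]     # ublk.sh@74
[[ $(jq -r '.[0].id'          <<<"$disks") == 0 ]]               # ublk.sh@75
[[ $(jq -r '.[0].queue_depth' <<<"$disks") == 512 ]]             # ublk.sh@76
[[ $(jq -r '.[0].num_queues'  <<<"$disks") == 4 ]]               # ublk.sh@77
[[ $(jq -r '.[0].bdev_name'   <<<"$disks") == Malloc0 ]]         # ublk.sh@78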
00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:31.075 18:33:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.075 [2024-07-23 18:33:31.016949] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:31.075 [2024-07-23 18:33:31.017158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87262 ] 00:15:31.336 [2024-07-23 18:33:31.168659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.336 [2024-07-23 18:33:31.238475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.336 [2024-07-23 18:33:31.238636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:31.907 18:33:31 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.907 [2024-07-23 18:33:31.780604] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:31.907 [2024-07-23 18:33:31.782778] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.907 18:33:31 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.907 malloc0 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.907 18:33:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.907 18:33:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.907 [2024-07-23 18:33:31.860752] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:31.907 [2024-07-23 18:33:31.860857] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:31.907 [2024-07-23 18:33:31.860873] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:31.907 [2024-07-23 18:33:31.860883] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:31.907 [2024-07-23 18:33:31.869739] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:31.907 [2024-07-23 18:33:31.869757] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:31.907 [2024-07-23 18:33:31.876621] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:31.907 [2024-07-23 18:33:31.876771] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:31.907 [2024-07-23 18:33:31.893624] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:31.908 1 00:15:31.908 18:33:31 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.908 
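The setup the recovery test has performed at this point reduces to four commands, all visible in the xtrace above; a condensed sketch (full rpc.py path shortened for readability):

modprobe ublk_drv                                      # kernel ublk driver, ublk_recovery.sh@11
scripts/rpc.py ublk_create_target                      # ublk_recovery.sh@23
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev, ublk_recovery.sh@24
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1, ublk_recovery.sh@25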
18:33:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:33.290 18:33:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87295 00:15:33.290 18:33:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:33.290 18:33:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:33.290 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:33.290 fio-3.35 00:15:33.290 Starting 1 process 00:15:38.572 18:33:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87262 00:15:38.572 18:33:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:43.853 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87262 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:43.853 18:33:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87399 00:15:43.853 18:33:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:43.853 18:33:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:43.853 18:33:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87399 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87399 ']' 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:43.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:43.853 18:33:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.853 [2024-07-23 18:33:43.019147] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
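The crash being recovered from is simulated exactly as traced above: fio keeps a randrw job running against /dev/ublkb1 while the original target is killed with SIGKILL and a fresh spdk_tgt is started. A condensed sketch of that sequence, using only commands visible in the trace (the pids are specific to this run, and paths are shortened):

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &   # ublk_recovery.sh@30
sleep 5                                                                     # ublk_recovery.sh@33
kill -9 "$spdk_pid"                     # 87262 in this run, ublk_recovery.sh@36
sleep 5                                                                     # ublk_recovery.sh@38
build/bin/spdk_tgt -m 0x3 -L ublk &     # restarted target, ublk_recovery.sh@41; picks up as pid 87399 here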
00:15:43.853 [2024-07-23 18:33:43.019281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87399 ] 00:15:43.853 [2024-07-23 18:33:43.165623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.853 [2024-07-23 18:33:43.233936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.853 [2024-07-23 18:33:43.234060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:43.853 18:33:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.853 [2024-07-23 18:33:43.794596] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:43.853 [2024-07-23 18:33:43.796762] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.853 18:33:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.853 malloc0 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.853 18:33:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.853 [2024-07-23 18:33:43.858737] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:43.853 [2024-07-23 18:33:43.858796] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:43.853 [2024-07-23 18:33:43.858810] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:43.853 [2024-07-23 18:33:43.866639] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:43.853 [2024-07-23 18:33:43.866665] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:43.853 [2024-07-23 18:33:43.866744] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:43.853 1 00:15:43.853 18:33:43 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.853 18:33:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87295 00:15:43.853 [2024-07-23 18:33:43.874605] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:43.853 [2024-07-23 18:33:43.878334] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:43.853 [2024-07-23 18:33:43.882776] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:43.853 [2024-07-23 18:33:43.882796] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:40.102 00:16:40.102 fio_test: (groupid=0, 
jobs=1): err= 0: pid=87298: Tue Jul 23 18:34:33 2024 00:16:40.102 read: IOPS=19.6k, BW=76.8MiB/s (80.5MB/s)(4605MiB/60002msec) 00:16:40.102 slat (nsec): min=953, max=1708.9k, avg=8375.67, stdev=3812.86 00:16:40.102 clat (usec): min=1154, max=5980.8k, avg=3216.35, stdev=45055.83 00:16:40.102 lat (usec): min=1169, max=5980.8k, avg=3224.72, stdev=45055.83 00:16:40.102 clat percentiles (usec): 00:16:40.102 | 1.00th=[ 2114], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2671], 00:16:40.102 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:16:40.102 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 3064], 95.00th=[ 3851], 00:16:40.102 | 99.00th=[ 5276], 99.50th=[ 6128], 99.90th=[ 7439], 99.95th=[ 8717], 00:16:40.102 | 99.99th=[14484] 00:16:40.102 bw ( KiB/s): min=22760, max=105192, per=100.00%, avg=86645.62, stdev=9802.86, samples=108 00:16:40.102 iops : min= 5690, max=26298, avg=21661.37, stdev=2450.74, samples=108 00:16:40.102 write: IOPS=19.6k, BW=76.7MiB/s (80.4MB/s)(4603MiB/60002msec); 0 zone resets 00:16:40.102 slat (nsec): min=1085, max=1290.4k, avg=8496.18, stdev=4037.04 00:16:40.102 clat (usec): min=918, max=5980.9k, avg=3280.95, stdev=43004.39 00:16:40.102 lat (usec): min=927, max=5980.9k, avg=3289.44, stdev=43004.39 00:16:40.102 clat percentiles (usec): 00:16:40.102 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2737], 00:16:40.102 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:16:40.102 | 70.00th=[ 2999], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3851], 00:16:40.102 | 99.00th=[ 5342], 99.50th=[ 6194], 99.90th=[ 7504], 99.95th=[ 8848], 00:16:40.102 | 99.99th=[14353] 00:16:40.102 bw ( KiB/s): min=23880, max=105256, per=100.00%, avg=86592.02, stdev=9618.12, samples=108 00:16:40.102 iops : min= 5970, max=26314, avg=21647.97, stdev=2404.52, samples=108 00:16:40.103 lat (usec) : 1000=0.01% 00:16:40.103 lat (msec) : 2=0.43%, 4=95.26%, 10=4.29%, 20=0.02%, >=2000=0.01% 00:16:40.103 cpu : usr=9.64%, sys=33.54%, ctx=103577, majf=0, minf=13 00:16:40.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:40.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.103 issued rwts: total=1178981,1178257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:40.103 00:16:40.103 Run status group 0 (all jobs): 00:16:40.103 READ: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=4605MiB (4829MB), run=60002-60002msec 00:16:40.103 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=4603MiB (4826MB), run=60002-60002msec 00:16:40.103 00:16:40.103 Disk stats (read/write): 00:16:40.103 ublkb1: ios=1176663/1175987, merge=0/0, ticks=3673971/3615600, in_queue=7289572, util=99.95% 00:16:40.103 18:34:33 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 [2024-07-23 18:34:33.176241] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:40.103 [2024-07-23 18:34:33.217772] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:40.103 [2024-07-23 18:34:33.221649] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:40.103 
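As a sanity check on the fio summary above, the reported bandwidth follows directly from the issued I/O counts: 1,178,981 reads x 4096 B is roughly 4829 MB, which over the 60.002 s run gives about 80.5 MB/s, and 1,178,257 writes x 4096 B is roughly 4826 MB, or about 80.4 MB/s, matching the READ and WRITE lines. In other words, the 60-second randrw job ran to completion at full rate across the kill/restart of the target.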
[2024-07-23 18:34:33.232645] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:40.103 [2024-07-23 18:34:33.232857] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:40.103 [2024-07-23 18:34:33.232878] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.103 18:34:33 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 [2024-07-23 18:34:33.240761] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.103 [2024-07-23 18:34:33.248005] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:40.103 [2024-07-23 18:34:33.248054] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.103 18:34:33 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:40.103 18:34:33 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:40.103 18:34:33 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87399 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 87399 ']' 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 87399 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87399 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87399' 00:16:40.103 killing process with pid 87399 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@965 -- # kill 87399 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@970 -- # wait 87399 00:16:40.103 [2024-07-23 18:34:33.427039] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.103 [2024-07-23 18:34:33.427118] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:40.103 00:16:40.103 real 1m2.925s 00:16:40.103 user 1m45.424s 00:16:40.103 sys 0m35.591s 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:40.103 18:34:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 ************************************ 00:16:40.103 END TEST ublk_recovery 00:16:40.103 ************************************ 00:16:40.103 18:34:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:40.103 18:34:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.103 18:34:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 18:34:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 
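With TEST ublk_recovery finished, the recovery path it exercised is worth restating in one place. Every command below appears in the trace above (ublk_recovery.sh@47-56); only the surrounding control flow is elided and the rpc.py path is shortened:

# on the freshly restarted target: recreate the target state, then recover the still-live /dev/ublkb1
scripts/rpc.py ublk_create_target                       # ublk_recovery.sh@47
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096    # ublk_recovery.sh@48
scripts/rpc.py ublk_recover_disk malloc0 1              # ublk_recovery.sh@49: GET_DEV_INFO ->
                                                        # START_USER_RECOVERY -> END_USER_RECOVERY
wait "$fio_proc"                                        # ublk_recovery.sh@52: fio (pid 87295) finishes its 60 s run
scripts/rpc.py ublk_stop_disk 1                         # ublk_recovery.sh@55
scripts/rpc.py ublk_destroy_target                      # ublk_recovery.sh@56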
00:16:40.103 18:34:33 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:16:40.103 18:34:33 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.103 18:34:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:40.103 18:34:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:40.103 18:34:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 ************************************ 00:16:40.103 START TEST ftl 00:16:40.103 ************************************ 00:16:40.103 18:34:33 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.103 * Looking for test storage... 00:16:40.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:40.103 18:34:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.103 18:34:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.103 18:34:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.103 18:34:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:40.103 18:34:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:40.103 18:34:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.103 18:34:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.103 18:34:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.103 18:34:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.103 18:34:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.103 18:34:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:40.103 18:34:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:40.103 18:34:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.103 18:34:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.103 18:34:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:40.103 18:34:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.103 18:34:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.103 18:34:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.103 18:34:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.103 18:34:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:40.103 18:34:33 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:40.103 18:34:33 ftl -- 
ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.103 18:34:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:40.103 18:34:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:40.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:40.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.103 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.103 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.103 18:34:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:40.103 18:34:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88179 00:16:40.103 18:34:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88179 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@827 -- # '[' -z 88179 ']' 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.103 18:34:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:40.103 [2024-07-23 18:34:34.839972] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:16:40.103 [2024-07-23 18:34:34.840094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88179 ] 00:16:40.103 [2024-07-23 18:34:34.988853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.103 [2024-07-23 18:34:35.036532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.103 18:34:35 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:40.103 18:34:35 ftl -- common/autotest_common.sh@860 -- # return 0 00:16:40.103 18:34:35 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:40.103 18:34:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:40.103 18:34:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@50 -- # break 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@63 -- # break 00:16:40.104 18:34:36 ftl -- ftl/ftl.sh@66 -- # killprocess 88179 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@946 -- # '[' -z 88179 ']' 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@950 -- # kill -0 88179 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@951 -- # uname 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88179 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:40.104 killing process with pid 88179 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88179' 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@965 -- # kill 88179 00:16:40.104 18:34:36 ftl -- common/autotest_common.sh@970 -- # wait 88179 00:16:40.104 18:34:37 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:40.104 18:34:37 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:40.104 18:34:37 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:40.104 18:34:37 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:40.104 18:34:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:40.104 ************************************ 00:16:40.104 START TEST ftl_fio_basic 00:16:40.104 ************************************ 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:40.104 * Looking for test storage... 00:16:40.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88292 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88292 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 88292 ']' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.104 18:34:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:40.104 [2024-07-23 18:34:37.588316] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
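For context on why fio.sh was handed 0000:00:11.0 as the base device and 0000:00:10.0 as the NV cache, the selection ftl.sh made above comes down to two bdev_get_bdevs queries; the jq filters below are copied verbatim from ftl.sh@47 and ftl.sh@60 in the trace (rpc.py path shortened):

# cache device: any non-zoned bdev with 64-byte metadata and >= 1310720 blocks -> 0000:00:10.0 here
scripts/rpc.py bdev_get_bdevs | \
  jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
# base device: any other large non-zoned NVMe bdev -> 0000:00:11.0 here
scripts/rpc.py bdev_get_bdevs | \
  jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'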
00:16:40.104 [2024-07-23 18:34:37.588454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88292 ] 00:16:40.104 [2024-07-23 18:34:37.736938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.104 [2024-07-23 18:34:37.785359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.104 [2024-07-23 18:34:37.785440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.104 [2024-07-23 18:34:37.785527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:40.104 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:40.104 { 00:16:40.104 "name": "nvme0n1", 00:16:40.104 "aliases": [ 00:16:40.104 "75fddabd-a9b1-4145-9549-4c90a8986083" 00:16:40.104 ], 00:16:40.104 "product_name": "NVMe disk", 00:16:40.104 "block_size": 4096, 00:16:40.104 "num_blocks": 1310720, 00:16:40.104 "uuid": "75fddabd-a9b1-4145-9549-4c90a8986083", 00:16:40.105 "assigned_rate_limits": { 00:16:40.105 "rw_ios_per_sec": 0, 00:16:40.105 "rw_mbytes_per_sec": 0, 00:16:40.105 "r_mbytes_per_sec": 0, 00:16:40.105 "w_mbytes_per_sec": 0 00:16:40.105 }, 00:16:40.105 "claimed": false, 00:16:40.105 "zoned": false, 00:16:40.105 "supported_io_types": { 00:16:40.105 "read": true, 00:16:40.105 "write": true, 00:16:40.105 "unmap": true, 00:16:40.105 "write_zeroes": true, 00:16:40.105 "flush": true, 00:16:40.105 "reset": true, 00:16:40.105 "compare": true, 00:16:40.105 "compare_and_write": false, 00:16:40.105 "abort": true, 00:16:40.105 "nvme_admin": true, 00:16:40.105 "nvme_io": true 00:16:40.105 }, 00:16:40.105 "driver_specific": { 00:16:40.105 "nvme": [ 00:16:40.105 { 00:16:40.105 "pci_address": "0000:00:11.0", 00:16:40.105 "trid": { 00:16:40.105 "trtype": "PCIe", 00:16:40.105 "traddr": "0000:00:11.0" 00:16:40.105 }, 
00:16:40.105 "ctrlr_data": { 00:16:40.105 "cntlid": 0, 00:16:40.105 "vendor_id": "0x1b36", 00:16:40.105 "model_number": "QEMU NVMe Ctrl", 00:16:40.105 "serial_number": "12341", 00:16:40.105 "firmware_revision": "8.0.0", 00:16:40.105 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:40.105 "oacs": { 00:16:40.105 "security": 0, 00:16:40.105 "format": 1, 00:16:40.105 "firmware": 0, 00:16:40.105 "ns_manage": 1 00:16:40.105 }, 00:16:40.105 "multi_ctrlr": false, 00:16:40.105 "ana_reporting": false 00:16:40.105 }, 00:16:40.105 "vs": { 00:16:40.105 "nvme_version": "1.4" 00:16:40.105 }, 00:16:40.105 "ns_data": { 00:16:40.105 "id": 1, 00:16:40.105 "can_share": false 00:16:40.105 } 00:16:40.105 } 00:16:40.105 ], 00:16:40.105 "mp_policy": "active_passive" 00:16:40.105 } 00:16:40.105 } 00:16:40.105 ]' 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:40.105 18:34:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=af43614e-e601-48ad-97ce-fc8ba3a2a3da 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af43614e-e601-48ad-97ce-fc8ba3a2a3da 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:40.105 { 00:16:40.105 "name": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.105 "aliases": [ 00:16:40.105 "lvs/nvme0n1p0" 00:16:40.105 ], 00:16:40.105 "product_name": "Logical Volume", 00:16:40.105 "block_size": 4096, 00:16:40.105 "num_blocks": 26476544, 00:16:40.105 "uuid": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.105 "assigned_rate_limits": { 00:16:40.105 "rw_ios_per_sec": 0, 00:16:40.105 "rw_mbytes_per_sec": 0, 00:16:40.105 "r_mbytes_per_sec": 0, 00:16:40.105 "w_mbytes_per_sec": 0 00:16:40.105 }, 00:16:40.105 "claimed": false, 00:16:40.105 "zoned": false, 00:16:40.105 "supported_io_types": { 00:16:40.105 "read": true, 00:16:40.105 "write": true, 00:16:40.105 "unmap": true, 00:16:40.105 "write_zeroes": true, 00:16:40.105 "flush": false, 00:16:40.105 "reset": true, 00:16:40.105 "compare": false, 00:16:40.105 "compare_and_write": false, 00:16:40.105 "abort": false, 00:16:40.105 "nvme_admin": false, 00:16:40.105 "nvme_io": false 00:16:40.105 }, 00:16:40.105 "driver_specific": { 00:16:40.105 "lvol": { 00:16:40.105 "lvol_store_uuid": "af43614e-e601-48ad-97ce-fc8ba3a2a3da", 00:16:40.105 "base_bdev": "nvme0n1", 00:16:40.105 "thin_provision": true, 00:16:40.105 "num_allocated_clusters": 0, 00:16:40.105 "snapshot": false, 00:16:40.105 "clone": false, 00:16:40.105 "esnap_clone": false 00:16:40.105 } 00:16:40.105 } 00:16:40.105 } 00:16:40.105 ]' 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:40.105 18:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:40.105 { 00:16:40.105 "name": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.105 "aliases": [ 00:16:40.105 "lvs/nvme0n1p0" 
00:16:40.105 ], 00:16:40.105 "product_name": "Logical Volume", 00:16:40.105 "block_size": 4096, 00:16:40.105 "num_blocks": 26476544, 00:16:40.105 "uuid": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.105 "assigned_rate_limits": { 00:16:40.105 "rw_ios_per_sec": 0, 00:16:40.105 "rw_mbytes_per_sec": 0, 00:16:40.105 "r_mbytes_per_sec": 0, 00:16:40.105 "w_mbytes_per_sec": 0 00:16:40.105 }, 00:16:40.105 "claimed": false, 00:16:40.105 "zoned": false, 00:16:40.105 "supported_io_types": { 00:16:40.105 "read": true, 00:16:40.105 "write": true, 00:16:40.105 "unmap": true, 00:16:40.105 "write_zeroes": true, 00:16:40.105 "flush": false, 00:16:40.105 "reset": true, 00:16:40.105 "compare": false, 00:16:40.105 "compare_and_write": false, 00:16:40.105 "abort": false, 00:16:40.105 "nvme_admin": false, 00:16:40.105 "nvme_io": false 00:16:40.105 }, 00:16:40.105 "driver_specific": { 00:16:40.105 "lvol": { 00:16:40.105 "lvol_store_uuid": "af43614e-e601-48ad-97ce-fc8ba3a2a3da", 00:16:40.105 "base_bdev": "nvme0n1", 00:16:40.105 "thin_provision": true, 00:16:40.105 "num_allocated_clusters": 0, 00:16:40.105 "snapshot": false, 00:16:40.105 "clone": false, 00:16:40.105 "esnap_clone": false 00:16:40.105 } 00:16:40.105 } 00:16:40.105 } 00:16:40.105 ]' 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:40.105 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:40.393 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:40.393 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b02ca423-7c2c-4da8-a904-f93e67385e9e 00:16:40.667 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:40.667 { 00:16:40.667 "name": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.667 "aliases": [ 00:16:40.667 "lvs/nvme0n1p0" 00:16:40.667 ], 00:16:40.667 "product_name": "Logical Volume", 00:16:40.667 "block_size": 4096, 00:16:40.667 "num_blocks": 26476544, 00:16:40.667 "uuid": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:40.667 "assigned_rate_limits": { 00:16:40.667 "rw_ios_per_sec": 0, 
00:16:40.667 "rw_mbytes_per_sec": 0, 00:16:40.667 "r_mbytes_per_sec": 0, 00:16:40.667 "w_mbytes_per_sec": 0 00:16:40.667 }, 00:16:40.667 "claimed": false, 00:16:40.667 "zoned": false, 00:16:40.667 "supported_io_types": { 00:16:40.667 "read": true, 00:16:40.667 "write": true, 00:16:40.667 "unmap": true, 00:16:40.667 "write_zeroes": true, 00:16:40.667 "flush": false, 00:16:40.667 "reset": true, 00:16:40.667 "compare": false, 00:16:40.667 "compare_and_write": false, 00:16:40.667 "abort": false, 00:16:40.667 "nvme_admin": false, 00:16:40.667 "nvme_io": false 00:16:40.667 }, 00:16:40.667 "driver_specific": { 00:16:40.667 "lvol": { 00:16:40.667 "lvol_store_uuid": "af43614e-e601-48ad-97ce-fc8ba3a2a3da", 00:16:40.667 "base_bdev": "nvme0n1", 00:16:40.667 "thin_provision": true, 00:16:40.667 "num_allocated_clusters": 0, 00:16:40.667 "snapshot": false, 00:16:40.667 "clone": false, 00:16:40.667 "esnap_clone": false 00:16:40.667 } 00:16:40.667 } 00:16:40.667 } 00:16:40.667 ]' 00:16:40.667 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:40.667 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:40.668 18:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b02ca423-7c2c-4da8-a904-f93e67385e9e -c nvc0n1p0 --l2p_dram_limit 60 00:16:40.927 [2024-07-23 18:34:40.732042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.927 [2024-07-23 18:34:40.732145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:40.927 [2024-07-23 18:34:40.732185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:40.927 [2024-07-23 18:34:40.732207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.927 [2024-07-23 18:34:40.732380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.927 [2024-07-23 18:34:40.732424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.927 [2024-07-23 18:34:40.732470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:40.927 [2024-07-23 18:34:40.732500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.927 [2024-07-23 18:34:40.732549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:40.927 [2024-07-23 18:34:40.732936] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:40.927 [2024-07-23 18:34:40.733016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.927 [2024-07-23 18:34:40.733046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.927 [2024-07-23 18:34:40.733072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:16:40.927 [2024-07-23 18:34:40.733114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:40.927 [2024-07-23 18:34:40.733223] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 81c7499f-d55c-4469-8564-5fe2ffaf2686 00:16:40.927 [2024-07-23 18:34:40.735729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.927 [2024-07-23 18:34:40.735792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:40.927 [2024-07-23 18:34:40.735823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:40.927 [2024-07-23 18:34:40.735845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.749768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.749848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.928 [2024-07-23 18:34:40.749905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:16:40.928 [2024-07-23 18:34:40.749935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.750071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.750129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.928 [2024-07-23 18:34:40.750174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:16:40.928 [2024-07-23 18:34:40.750198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.750324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.750367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:40.928 [2024-07-23 18:34:40.750397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:40.928 [2024-07-23 18:34:40.750426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.750494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:40.928 [2024-07-23 18:34:40.753290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.753344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.928 [2024-07-23 18:34:40.753376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:16:40.928 [2024-07-23 18:34:40.753396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.753480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.753513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:40.928 [2024-07-23 18:34:40.753543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:40.928 [2024-07-23 18:34:40.753588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.753669] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:40.928 [2024-07-23 18:34:40.753824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:40.928 [2024-07-23 18:34:40.753870] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:40.928 [2024-07-23 18:34:40.753916] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:40.928 [2024-07-23 18:34:40.753991] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754032] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754078] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:40.928 [2024-07-23 18:34:40.754103] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:40.928 [2024-07-23 18:34:40.754132] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:40.928 [2024-07-23 18:34:40.754153] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:40.928 [2024-07-23 18:34:40.754185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.754206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:40.928 [2024-07-23 18:34:40.754240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:16:40.928 [2024-07-23 18:34:40.754269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.754383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.928 [2024-07-23 18:34:40.754423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:40.928 [2024-07-23 18:34:40.754460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:40.928 [2024-07-23 18:34:40.754486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.928 [2024-07-23 18:34:40.754624] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:40.928 [2024-07-23 18:34:40.754660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:40.928 [2024-07-23 18:34:40.754684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:40.928 [2024-07-23 18:34:40.754758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:40.928 [2024-07-23 18:34:40.754809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754816] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.928 [2024-07-23 18:34:40.754825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:40.928 [2024-07-23 18:34:40.754832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:40.928 [2024-07-23 18:34:40.754840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.928 [2024-07-23 18:34:40.754847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:40.928 [2024-07-23 18:34:40.754859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:40.928 [2024-07-23 18:34:40.754865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 
[2024-07-23 18:34:40.754873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:40.928 [2024-07-23 18:34:40.754880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:40.928 [2024-07-23 18:34:40.754904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:40.928 [2024-07-23 18:34:40.754925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:40.928 [2024-07-23 18:34:40.754952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:40.928 [2024-07-23 18:34:40.754974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:40.928 [2024-07-23 18:34:40.754985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:40.928 [2024-07-23 18:34:40.754992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:40.928 [2024-07-23 18:34:40.755001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:40.928 [2024-07-23 18:34:40.755007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.928 [2024-07-23 18:34:40.755017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:40.928 [2024-07-23 18:34:40.755024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:40.928 [2024-07-23 18:34:40.755033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.928 [2024-07-23 18:34:40.755039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:40.928 [2024-07-23 18:34:40.755047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:40.928 [2024-07-23 18:34:40.755053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.755061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:40.928 [2024-07-23 18:34:40.755073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:40.928 [2024-07-23 18:34:40.755083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.755089] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:40.928 [2024-07-23 18:34:40.755099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:40.928 [2024-07-23 18:34:40.755107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.928 [2024-07-23 18:34:40.755119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.928 [2024-07-23 18:34:40.755146] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:40.928 [2024-07-23 18:34:40.755157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:40.928 [2024-07-23 18:34:40.755163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:40.929 [2024-07-23 18:34:40.755172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:40.929 [2024-07-23 18:34:40.755179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:40.929 [2024-07-23 18:34:40.755188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:40.929 [2024-07-23 18:34:40.755200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:40.929 [2024-07-23 18:34:40.755212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:40.929 [2024-07-23 18:34:40.755233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:40.929 [2024-07-23 18:34:40.755242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:40.929 [2024-07-23 18:34:40.755252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:40.929 [2024-07-23 18:34:40.755258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:40.929 [2024-07-23 18:34:40.755268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:40.929 [2024-07-23 18:34:40.755275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:40.929 [2024-07-23 18:34:40.755288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:40.929 [2024-07-23 18:34:40.755295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:40.929 [2024-07-23 18:34:40.755305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:40.929 [2024-07-23 18:34:40.755344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:40.929 [2024-07-23 
18:34:40.755354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:40.929 [2024-07-23 18:34:40.755372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:40.929 [2024-07-23 18:34:40.755379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:40.929 [2024-07-23 18:34:40.755387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:40.929 [2024-07-23 18:34:40.755395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.929 [2024-07-23 18:34:40.755405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:40.929 [2024-07-23 18:34:40.755413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:16:40.929 [2024-07-23 18:34:40.755437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.929 [2024-07-23 18:34:40.755525] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:40.929 [2024-07-23 18:34:40.755537] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:43.462 [2024-07-23 18:34:42.972491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-07-23 18:34:42.972651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:43.462 [2024-07-23 18:34:42.972696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2221.234 ms 00:16:43.462 [2024-07-23 18:34:42.972732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:42.984287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:42.984422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.463 [2024-07-23 18:34:42.984481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.396 ms 00:16:43.463 [2024-07-23 18:34:42.984520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:42.984835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:42.984902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.463 [2024-07-23 18:34:42.984951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:43.463 [2024-07-23 18:34:42.985004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.005040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.005172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.463 [2024-07-23 18:34:43.005225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:16:43.463 [2024-07-23 18:34:43.005261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.005384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 
18:34:43.005479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.463 [2024-07-23 18:34:43.005536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:43.463 [2024-07-23 18:34:43.005619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.006260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.006290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.463 [2024-07-23 18:34:43.006305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:16:43.463 [2024-07-23 18:34:43.006319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.006519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.006557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.463 [2024-07-23 18:34:43.006595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:43.463 [2024-07-23 18:34:43.006611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.014482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.014533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.463 [2024-07-23 18:34:43.014550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.819 ms 00:16:43.463 [2024-07-23 18:34:43.014565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.022135] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:43.463 [2024-07-23 18:34:43.038744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.038804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.463 [2024-07-23 18:34:43.038822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:16:43.463 [2024-07-23 18:34:43.038831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.087581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.087625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:43.463 [2024-07-23 18:34:43.087640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.724 ms 00:16:43.463 [2024-07-23 18:34:43.087665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.087900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.087925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.463 [2024-07-23 18:34:43.087943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:43.463 [2024-07-23 18:34:43.087957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.091420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.091457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:43.463 [2024-07-23 18:34:43.091476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:16:43.463 [2024-07-23 
18:34:43.091499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.094326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.094358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:43.463 [2024-07-23 18:34:43.094372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:16:43.463 [2024-07-23 18:34:43.094398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.094807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.094872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.463 [2024-07-23 18:34:43.094928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:16:43.463 [2024-07-23 18:34:43.094980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.131281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.131370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:43.463 [2024-07-23 18:34:43.131435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.258 ms 00:16:43.463 [2024-07-23 18:34:43.131469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.136063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.136099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:43.463 [2024-07-23 18:34:43.136118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.467 ms 00:16:43.463 [2024-07-23 18:34:43.136129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.139453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.139488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:43.463 [2024-07-23 18:34:43.139503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:16:43.463 [2024-07-23 18:34:43.139512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.143352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.143388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.463 [2024-07-23 18:34:43.143403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:16:43.463 [2024-07-23 18:34:43.143412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.143525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.143538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.463 [2024-07-23 18:34:43.143551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:43.463 [2024-07-23 18:34:43.143560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.143732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-07-23 18:34:43.143744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:43.463 [2024-07-23 18:34:43.143776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.051 ms 00:16:43.463 [2024-07-23 18:34:43.143786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-07-23 18:34:43.145375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2417.346 ms, result 0 00:16:43.463 { 00:16:43.463 "name": "ftl0", 00:16:43.463 "uuid": "81c7499f-d55c-4469-8564-5fe2ffaf2686" 00:16:43.463 } 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:43.463 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:43.721 [ 00:16:43.721 { 00:16:43.721 "name": "ftl0", 00:16:43.721 "aliases": [ 00:16:43.721 "81c7499f-d55c-4469-8564-5fe2ffaf2686" 00:16:43.721 ], 00:16:43.721 "product_name": "FTL disk", 00:16:43.721 "block_size": 4096, 00:16:43.721 "num_blocks": 20971520, 00:16:43.721 "uuid": "81c7499f-d55c-4469-8564-5fe2ffaf2686", 00:16:43.721 "assigned_rate_limits": { 00:16:43.721 "rw_ios_per_sec": 0, 00:16:43.721 "rw_mbytes_per_sec": 0, 00:16:43.721 "r_mbytes_per_sec": 0, 00:16:43.721 "w_mbytes_per_sec": 0 00:16:43.721 }, 00:16:43.721 "claimed": false, 00:16:43.721 "zoned": false, 00:16:43.721 "supported_io_types": { 00:16:43.721 "read": true, 00:16:43.721 "write": true, 00:16:43.721 "unmap": true, 00:16:43.721 "write_zeroes": true, 00:16:43.721 "flush": true, 00:16:43.721 "reset": false, 00:16:43.721 "compare": false, 00:16:43.721 "compare_and_write": false, 00:16:43.721 "abort": false, 00:16:43.721 "nvme_admin": false, 00:16:43.721 "nvme_io": false 00:16:43.721 }, 00:16:43.721 "driver_specific": { 00:16:43.721 "ftl": { 00:16:43.721 "base_bdev": "b02ca423-7c2c-4da8-a904-f93e67385e9e", 00:16:43.721 "cache": "nvc0n1p0" 00:16:43.721 } 00:16:43.721 } 00:16:43.721 } 00:16:43.721 ] 00:16:43.721 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:16:43.721 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:43.721 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:43.721 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:43.721 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:43.981 [2024-07-23 18:34:43.846402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.846466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.981 [2024-07-23 18:34:43.846482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:43.981 [2024-07-23 18:34:43.846493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.846565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:16:43.981 [2024-07-23 18:34:43.847361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.847388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.981 [2024-07-23 18:34:43.847405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:16:43.981 [2024-07-23 18:34:43.847417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.848206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.848238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.981 [2024-07-23 18:34:43.848251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:16:43.981 [2024-07-23 18:34:43.848261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.850820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.850845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.981 [2024-07-23 18:34:43.850859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:16:43.981 [2024-07-23 18:34:43.850884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.855896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.855946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:43.981 [2024-07-23 18:34:43.855977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:16:43.981 [2024-07-23 18:34:43.855996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.857826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.857864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.981 [2024-07-23 18:34:43.857882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:16:43.981 [2024-07-23 18:34:43.857891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.863602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.863641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.981 [2024-07-23 18:34:43.863657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.663 ms 00:16:43.981 [2024-07-23 18:34:43.863668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.863939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.863950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.981 [2024-07-23 18:34:43.863983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:43.981 [2024-07-23 18:34:43.863992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.866142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.866176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:43.981 [2024-07-23 18:34:43.866190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:16:43.981 
[2024-07-23 18:34:43.866198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.867899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.867940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:43.981 [2024-07-23 18:34:43.867957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:16:43.981 [2024-07-23 18:34:43.867966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.869352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.869388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.981 [2024-07-23 18:34:43.869403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:16:43.981 [2024-07-23 18:34:43.869411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.870726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.981 [2024-07-23 18:34:43.870760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.981 [2024-07-23 18:34:43.870773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:16:43.981 [2024-07-23 18:34:43.870782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.981 [2024-07-23 18:34:43.870847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.981 [2024-07-23 18:34:43.870863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.870992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.981 [2024-07-23 18:34:43.871012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.981 [2024-07-23 18:34:43.871222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871901] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.871994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.872004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.872016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.982 [2024-07-23 18:34:43.872032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.982 [2024-07-23 18:34:43.872045] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81c7499f-d55c-4469-8564-5fe2ffaf2686 00:16:43.982 [2024-07-23 18:34:43.872054] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.982 [2024-07-23 18:34:43.872068] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.982 [2024-07-23 18:34:43.872078] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.982 [2024-07-23 18:34:43.872090] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.982 [2024-07-23 18:34:43.872098] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.983 [2024-07-23 18:34:43.872109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.983 [2024-07-23 18:34:43.872119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.983 [2024-07-23 18:34:43.872129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.983 [2024-07-23 18:34:43.872137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.983 [2024-07-23 18:34:43.872150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.983 [2024-07-23 18:34:43.872160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.983 [2024-07-23 18:34:43.872173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:16:43.983 [2024-07-23 18:34:43.872182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.874046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.983 [2024-07-23 18:34:43.874062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:16:43.983 [2024-07-23 18:34:43.874077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:16:43.983 [2024-07-23 18:34:43.874087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.874250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.983 [2024-07-23 18:34:43.874261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.983 [2024-07-23 18:34:43.874274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:43.983 [2024-07-23 18:34:43.874282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.881209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.881302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.983 [2024-07-23 18:34:43.881342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.881388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.881518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.881588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.983 [2024-07-23 18:34:43.881657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.881706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.881895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.881955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.983 [2024-07-23 18:34:43.882011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.882061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.882170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.882219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.983 [2024-07-23 18:34:43.882270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.882321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.896750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.896901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.983 [2024-07-23 18:34:43.896958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.896984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.905592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.905724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.983 [2024-07-23 18:34:43.905782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.905808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.905986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 
18:34:43.906065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.983 [2024-07-23 18:34:43.906121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.906170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.906348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.906401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.983 [2024-07-23 18:34:43.906455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.906504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.906780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.906839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.983 [2024-07-23 18:34:43.906923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.906971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.907143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.907202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.983 [2024-07-23 18:34:43.907256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.907302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.907428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.907478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.983 [2024-07-23 18:34:43.907563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.907626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.907803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.983 [2024-07-23 18:34:43.907879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.983 [2024-07-23 18:34:43.907933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.983 [2024-07-23 18:34:43.907987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.983 [2024-07-23 18:34:43.908328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.974 ms, result 0 00:16:43.983 true 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88292 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 88292 ']' 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 88292 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88292 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:43.983 
killing process with pid 88292 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88292' 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 88292 00:16:43.983 18:34:43 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 88292 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:52.109 18:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:52.368 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:52.368 fio-3.35 00:16:52.368 Starting 1 thread 00:16:57.647 00:16:57.647 test: (groupid=0, jobs=1): err= 0: pid=88521: Tue Jul 23 18:34:56 2024 00:16:57.647 read: IOPS=988, BW=65.6MiB/s (68.8MB/s)(255MiB/3879msec) 00:16:57.647 slat (nsec): min=4190, max=28919, avg=6219.99, stdev=2507.71 00:16:57.647 clat (usec): min=286, max=6475, avg=461.13, stdev=144.04 00:16:57.647 lat (usec): min=291, max=6482, avg=467.35, stdev=144.21 00:16:57.647 clat percentiles (usec): 00:16:57.647 | 1.00th=[ 322], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 433], 00:16:57.647 | 30.00th=[ 441], 40.00th=[ 445], 
50.00th=[ 453], 60.00th=[ 461], 00:16:57.647 | 70.00th=[ 478], 80.00th=[ 506], 90.00th=[ 523], 95.00th=[ 529], 00:16:57.647 | 99.00th=[ 570], 99.50th=[ 635], 99.90th=[ 840], 99.95th=[ 5669], 00:16:57.647 | 99.99th=[ 6456] 00:16:57.647 write: IOPS=995, BW=66.1MiB/s (69.3MB/s)(256MiB/3875msec); 0 zone resets 00:16:57.647 slat (nsec): min=15391, max=81378, avg=20737.65, stdev=5146.84 00:16:57.647 clat (usec): min=356, max=2127, avg=508.38, stdev=69.06 00:16:57.647 lat (usec): min=381, max=2147, avg=529.12, stdev=69.46 00:16:57.647 clat percentiles (usec): 00:16:57.647 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 453], 20.00th=[ 461], 00:16:57.647 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 523], 60.00th=[ 529], 00:16:57.647 | 70.00th=[ 537], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 594], 00:16:57.647 | 99.00th=[ 775], 99.50th=[ 832], 99.90th=[ 930], 99.95th=[ 1237], 00:16:57.647 | 99.99th=[ 2114] 00:16:57.647 bw ( KiB/s): min=65552, max=69496, per=100.00%, avg=67902.86, stdev=1558.01, samples=7 00:16:57.647 iops : min= 964, max= 1022, avg=998.57, stdev=22.91, samples=7 00:16:57.647 lat (usec) : 500=58.43%, 750=40.62%, 1000=0.88% 00:16:57.647 lat (msec) : 2=0.01%, 4=0.03%, 10=0.03% 00:16:57.647 cpu : usr=99.36%, sys=0.03%, ctx=5, majf=0, minf=1326 00:16:57.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.647 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.647 00:16:57.647 Run status group 0 (all jobs): 00:16:57.647 READ: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=255MiB (267MB), run=3879-3879msec 00:16:57.647 WRITE: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=256MiB (269MB), run=3875-3875msec 00:16:57.908 ----------------------------------------------------- 00:16:57.908 Suppressions used: 00:16:57.908 count bytes template 00:16:57.908 1 5 /usr/src/fio/parse.c 00:16:57.908 1 8 libtcmalloc_minimal.so 00:16:57.908 1 904 libcrypto.so 00:16:57.908 ----------------------------------------------------- 00:16:57.908 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.908 18:34:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:58.168 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:58.168 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:58.168 fio-3.35 00:16:58.168 Starting 2 threads 00:17:30.274 00:17:30.274 first_half: (groupid=0, jobs=1): err= 0: pid=88613: Tue Jul 23 18:35:26 2024 00:17:30.274 read: IOPS=2406, BW=9626KiB/s (9857kB/s)(256MiB/27210msec) 00:17:30.274 slat (nsec): min=3269, max=54085, avg=10142.25, stdev=3997.40 00:17:30.274 clat (msec): min=12, max=247, avg=44.88, stdev=27.86 00:17:30.274 lat (msec): min=12, max=247, avg=44.89, stdev=27.86 00:17:30.274 clat percentiles (msec): 00:17:30.274 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:17:30.274 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:17:30.274 | 70.00th=[ 39], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 89], 00:17:30.274 | 99.00th=[ 201], 99.50th=[ 220], 99.90th=[ 232], 99.95th=[ 239], 00:17:30.274 | 99.99th=[ 245] 00:17:30.274 write: IOPS=2420, BW=9681KiB/s (9913kB/s)(256MiB/27079msec); 0 zone resets 00:17:30.274 slat (usec): min=3, max=655, avg= 9.78, stdev= 7.21 00:17:30.274 clat (usec): min=460, max=46291, avg=8269.90, stdev=4605.22 00:17:30.274 lat (usec): min=478, max=46298, avg=8279.68, stdev=4605.38 00:17:30.274 clat percentiles (usec): 00:17:30.274 | 1.00th=[ 1385], 5.00th=[ 2376], 10.00th=[ 3064], 20.00th=[ 4686], 00:17:30.274 | 30.00th=[ 6128], 40.00th=[ 7046], 50.00th=[ 7963], 60.00th=[ 8455], 00:17:30.274 | 70.00th=[ 8979], 80.00th=[10552], 90.00th=[14615], 95.00th=[16319], 00:17:30.274 | 99.00th=[21627], 99.50th=[31065], 99.90th=[42730], 99.95th=[43779], 00:17:30.274 | 99.99th=[45876] 00:17:30.274 bw ( KiB/s): min= 48, max=43144, per=100.00%, avg=20832.96, stdev=14278.77, samples=25 00:17:30.274 iops : min= 12, max=10786, avg=5208.24, stdev=3569.69, samples=25 00:17:30.274 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.12% 00:17:30.274 lat (msec) : 2=1.41%, 4=6.09%, 10=30.66%, 20=11.17%, 50=45.75% 00:17:30.274 lat (msec) : 
100=2.50%, 250=2.25% 00:17:30.274 cpu : usr=99.25%, sys=0.20%, ctx=41, majf=0, minf=5593 00:17:30.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:30.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.274 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.274 issued rwts: total=65480,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.274 second_half: (groupid=0, jobs=1): err= 0: pid=88614: Tue Jul 23 18:35:26 2024 00:17:30.274 read: IOPS=2389, BW=9559KiB/s (9788kB/s)(256MiB/27401msec) 00:17:30.274 slat (nsec): min=3446, max=84698, avg=10279.38, stdev=3909.27 00:17:30.274 clat (usec): min=722, max=333135, avg=44320.71, stdev=31240.58 00:17:30.274 lat (usec): min=738, max=333152, avg=44330.99, stdev=31240.77 00:17:30.274 clat percentiles (msec): 00:17:30.274 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 37], 00:17:30.274 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:17:30.274 | 70.00th=[ 38], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 96], 00:17:30.274 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 253], 99.95th=[ 292], 00:17:30.274 | 99.99th=[ 326] 00:17:30.274 write: IOPS=2394, BW=9580KiB/s (9810kB/s)(256MiB/27364msec); 0 zone resets 00:17:30.274 slat (usec): min=4, max=469, avg=10.30, stdev= 5.44 00:17:30.274 clat (usec): min=356, max=51955, avg=9202.52, stdev=8575.91 00:17:30.274 lat (usec): min=370, max=51966, avg=9212.82, stdev=8576.07 00:17:30.274 clat percentiles (usec): 00:17:30.274 | 1.00th=[ 1270], 5.00th=[ 1729], 10.00th=[ 2114], 20.00th=[ 3163], 00:17:30.274 | 30.00th=[ 5211], 40.00th=[ 6915], 50.00th=[ 7635], 60.00th=[ 8586], 00:17:30.274 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[16057], 95.00th=[28705], 00:17:30.274 | 99.00th=[46400], 99.50th=[47449], 99.90th=[49546], 99.95th=[50070], 00:17:30.274 | 99.99th=[51119] 00:17:30.274 bw ( KiB/s): min= 3496, max=46720, per=100.00%, avg=20081.54, stdev=12304.04, samples=26 00:17:30.275 iops : min= 874, max=11680, avg=5020.38, stdev=3076.01, samples=26 00:17:30.275 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.11% 00:17:30.275 lat (msec) : 2=4.14%, 4=7.55%, 10=26.99%, 20=9.92%, 50=47.13% 00:17:30.275 lat (msec) : 100=1.73%, 250=2.36%, 500=0.05% 00:17:30.275 cpu : usr=99.27%, sys=0.11%, ctx=47, majf=0, minf=5537 00:17:30.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:30.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.275 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.275 issued rwts: total=65480,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.275 00:17:30.275 Run status group 0 (all jobs): 00:17:30.275 READ: bw=18.7MiB/s (19.6MB/s), 9559KiB/s-9626KiB/s (9788kB/s-9857kB/s), io=512MiB (536MB), run=27210-27401msec 00:17:30.275 WRITE: bw=18.7MiB/s (19.6MB/s), 9580KiB/s-9681KiB/s (9810kB/s-9913kB/s), io=512MiB (537MB), run=27079-27364msec 00:17:30.275 ----------------------------------------------------- 00:17:30.275 Suppressions used: 00:17:30.275 count bytes template 00:17:30.275 2 10 /usr/src/fio/parse.c 00:17:30.275 3 288 /usr/src/fio/iolog.c 00:17:30.275 1 8 libtcmalloc_minimal.so 00:17:30.275 1 904 libcrypto.so 00:17:30.275 ----------------------------------------------------- 00:17:30.275 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit 
randw-verify-j2 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.275 18:35:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:30.275 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:30.275 fio-3.35 00:17:30.275 Starting 1 thread 00:17:45.194 00:17:45.194 test: (groupid=0, jobs=1): err= 0: pid=88961: Tue Jul 23 18:35:44 2024 00:17:45.194 read: IOPS=6903, BW=27.0MiB/s (28.3MB/s)(255MiB/9445msec) 00:17:45.194 slat (nsec): min=3214, max=55677, avg=9249.82, stdev=4504.27 00:17:45.194 clat (usec): min=721, max=36415, avg=18526.77, stdev=1136.26 00:17:45.194 lat (usec): min=724, max=36419, avg=18536.02, stdev=1136.16 00:17:45.194 clat percentiles (usec): 00:17:45.194 | 1.00th=[17433], 5.00th=[17695], 10.00th=[17957], 20.00th=[17957], 00:17:45.194 | 30.00th=[18220], 40.00th=[18220], 50.00th=[18482], 60.00th=[18482], 00:17:45.194 | 70.00th=[18744], 80.00th=[18744], 90.00th=[19006], 95.00th=[19268], 00:17:45.194 | 99.00th=[21890], 99.50th=[26870], 99.90th=[30278], 
99.95th=[31851], 00:17:45.194 | 99.99th=[35390] 00:17:45.194 write: IOPS=11.7k, BW=45.5MiB/s (47.8MB/s)(256MiB/5621msec); 0 zone resets 00:17:45.194 slat (usec): min=4, max=458, avg= 9.52, stdev= 6.10 00:17:45.194 clat (usec): min=612, max=65543, avg=10929.05, stdev=13719.00 00:17:45.194 lat (usec): min=620, max=65552, avg=10938.57, stdev=13719.19 00:17:45.194 clat percentiles (usec): 00:17:45.194 | 1.00th=[ 1139], 5.00th=[ 1401], 10.00th=[ 1582], 20.00th=[ 1811], 00:17:45.194 | 30.00th=[ 2024], 40.00th=[ 2442], 50.00th=[ 6718], 60.00th=[ 7898], 00:17:45.194 | 70.00th=[ 8979], 80.00th=[11207], 90.00th=[40109], 95.00th=[43779], 00:17:45.194 | 99.00th=[46400], 99.50th=[47973], 99.90th=[57934], 99.95th=[58459], 00:17:45.194 | 99.99th=[61080] 00:17:45.194 bw ( KiB/s): min= 8048, max=65160, per=93.68%, avg=43690.67, stdev=15048.17, samples=12 00:17:45.194 iops : min= 2012, max=16290, avg=10922.67, stdev=3762.04, samples=12 00:17:45.194 lat (usec) : 750=0.01%, 1000=0.13% 00:17:45.194 lat (msec) : 2=14.45%, 4=6.47%, 10=16.79%, 20=52.53%, 50=9.49% 00:17:45.194 lat (msec) : 100=0.13% 00:17:45.194 cpu : usr=99.00%, sys=0.25%, ctx=32, majf=0, minf=5577 00:17:45.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:45.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.194 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:45.194 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:45.194 00:17:45.194 Run status group 0 (all jobs): 00:17:45.194 READ: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=255MiB (267MB), run=9445-9445msec 00:17:45.194 WRITE: bw=45.5MiB/s (47.8MB/s), 45.5MiB/s-45.5MiB/s (47.8MB/s-47.8MB/s), io=256MiB (268MB), run=5621-5621msec 00:17:45.194 ----------------------------------------------------- 00:17:45.194 Suppressions used: 00:17:45.194 count bytes template 00:17:45.194 1 5 /usr/src/fio/parse.c 00:17:45.194 2 192 /usr/src/fio/iolog.c 00:17:45.194 1 8 libtcmalloc_minimal.so 00:17:45.194 1 904 libcrypto.so 00:17:45.194 ----------------------------------------------------- 00:17:45.194 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:45.454 Remove shared memory files 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid74233 /dev/shm/spdk_tgt_trace.pid87262 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:45.454 ************************************ 00:17:45.454 END TEST ftl_fio_basic 00:17:45.454 ************************************ 00:17:45.454 00:17:45.454 real 1m8.007s 00:17:45.454 user 2m34.794s 00:17:45.454 sys 0m3.554s 00:17:45.454 
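All three fio jobs in ftl_fio_basic above (randw-verify, randw-verify-j2, randw-verify-depth128) are launched through the same wrapper visible in the xtrace: resolve whichever ASan runtime the SPDK fio bdev plugin was linked against, then put it ahead of the plugin in LD_PRELOAD so that the spdk_bdev ioengine named in the job file can be loaded (ASan insists on being first in the preload list). A minimal sketch of that pattern, assuming bash and the paths shown in the trace; it is an illustration, not the actual fio_bdev/fio_plugin helpers, which also probe libclang_rt.asan and scope LD_PRELOAD to the single fio invocation:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
  # pick up the ASan runtime the plugin links against (empty on non-ASan builds)
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload the sanitizer first, then the plugin, and run fio against the job file
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"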
18:35:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:45.454 18:35:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.454 18:35:45 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:45.454 18:35:45 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:45.454 18:35:45 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:45.454 18:35:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:45.454 ************************************ 00:17:45.454 START TEST ftl_bdevperf 00:17:45.454 ************************************ 00:17:45.454 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:45.714 * Looking for test storage... 00:17:45.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=89204 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 89204 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 89204 ']' 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.714 18:35:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:45.714 [2024-07-23 18:35:45.636981] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
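bdevperf is started here with -z, so it comes up idle and is driven later over the RPC socket; waitforlisten then blocks the test script until /var/tmp/spdk.sock is available before any rpc.py calls are issued. A minimal sketch of such a wait loop, assuming bash; this is only an illustration of the idea, not the actual waitforlisten() helper from autotest_common.sh:

  pid=89204                          # bdevperf pid captured by the script above
  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "bdevperf died during startup" >&2; exit 1; }
    [ -S "$sock" ] && break          # RPC socket node exists; the real helper also checks rpc.py can reach it
    sleep 0.1
  done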
00:17:45.714 [2024-07-23 18:35:45.637191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89204 ] 00:17:45.974 [2024-07-23 18:35:45.805927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.974 [2024-07-23 18:35:45.875723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:46.543 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:46.803 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.063 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:47.063 { 00:17:47.063 "name": "nvme0n1", 00:17:47.063 "aliases": [ 00:17:47.063 "6249955a-8f49-4b4d-a5cb-2949c09c7ec8" 00:17:47.063 ], 00:17:47.063 "product_name": "NVMe disk", 00:17:47.063 "block_size": 4096, 00:17:47.063 "num_blocks": 1310720, 00:17:47.063 "uuid": "6249955a-8f49-4b4d-a5cb-2949c09c7ec8", 00:17:47.063 "assigned_rate_limits": { 00:17:47.063 "rw_ios_per_sec": 0, 00:17:47.063 "rw_mbytes_per_sec": 0, 00:17:47.063 "r_mbytes_per_sec": 0, 00:17:47.063 "w_mbytes_per_sec": 0 00:17:47.063 }, 00:17:47.063 "claimed": true, 00:17:47.063 "claim_type": "read_many_write_one", 00:17:47.063 "zoned": false, 00:17:47.063 "supported_io_types": { 00:17:47.063 "read": true, 00:17:47.063 "write": true, 00:17:47.063 "unmap": true, 00:17:47.063 "write_zeroes": true, 00:17:47.063 "flush": true, 00:17:47.063 "reset": true, 00:17:47.063 "compare": true, 00:17:47.063 "compare_and_write": false, 00:17:47.063 "abort": true, 00:17:47.063 "nvme_admin": true, 00:17:47.063 "nvme_io": true 00:17:47.063 }, 00:17:47.063 "driver_specific": { 00:17:47.063 "nvme": [ 00:17:47.063 { 00:17:47.063 "pci_address": "0000:00:11.0", 00:17:47.063 "trid": { 00:17:47.063 "trtype": "PCIe", 00:17:47.063 "traddr": "0000:00:11.0" 00:17:47.063 }, 00:17:47.063 "ctrlr_data": { 00:17:47.063 "cntlid": 0, 00:17:47.063 "vendor_id": "0x1b36", 00:17:47.063 "model_number": "QEMU NVMe Ctrl", 00:17:47.063 "serial_number": "12341", 
00:17:47.063 "firmware_revision": "8.0.0", 00:17:47.063 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.063 "oacs": { 00:17:47.063 "security": 0, 00:17:47.063 "format": 1, 00:17:47.063 "firmware": 0, 00:17:47.063 "ns_manage": 1 00:17:47.063 }, 00:17:47.063 "multi_ctrlr": false, 00:17:47.063 "ana_reporting": false 00:17:47.063 }, 00:17:47.063 "vs": { 00:17:47.063 "nvme_version": "1.4" 00:17:47.063 }, 00:17:47.063 "ns_data": { 00:17:47.063 "id": 1, 00:17:47.063 "can_share": false 00:17:47.063 } 00:17:47.063 } 00:17:47.063 ], 00:17:47.063 "mp_policy": "active_passive" 00:17:47.063 } 00:17:47.063 } 00:17:47.063 ]' 00:17:47.063 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:47.063 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:47.063 18:35:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.063 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.323 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=af43614e-e601-48ad-97ce-fc8ba3a2a3da 00:17:47.323 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:47.323 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af43614e-e601-48ad-97ce-fc8ba3a2a3da 00:17:47.583 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:47.583 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=22e22162-3164-459c-952a-57b0f22aa5e0 00:17:47.583 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 22e22162-3164-459c-952a-57b0f22aa5e0 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:17:47.842 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.102 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.102 { 00:17:48.102 "name": "f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.102 "aliases": [ 00:17:48.102 "lvs/nvme0n1p0" 00:17:48.102 ], 00:17:48.102 "product_name": "Logical Volume", 00:17:48.102 "block_size": 4096, 00:17:48.102 "num_blocks": 26476544, 00:17:48.102 "uuid": "f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.102 "assigned_rate_limits": { 00:17:48.102 "rw_ios_per_sec": 0, 00:17:48.102 "rw_mbytes_per_sec": 0, 00:17:48.102 "r_mbytes_per_sec": 0, 00:17:48.102 "w_mbytes_per_sec": 0 00:17:48.102 }, 00:17:48.102 "claimed": false, 00:17:48.102 "zoned": false, 00:17:48.102 "supported_io_types": { 00:17:48.102 "read": true, 00:17:48.102 "write": true, 00:17:48.102 "unmap": true, 00:17:48.102 "write_zeroes": true, 00:17:48.102 "flush": false, 00:17:48.102 "reset": true, 00:17:48.102 "compare": false, 00:17:48.102 "compare_and_write": false, 00:17:48.102 "abort": false, 00:17:48.102 "nvme_admin": false, 00:17:48.102 "nvme_io": false 00:17:48.102 }, 00:17:48.102 "driver_specific": { 00:17:48.102 "lvol": { 00:17:48.102 "lvol_store_uuid": "22e22162-3164-459c-952a-57b0f22aa5e0", 00:17:48.102 "base_bdev": "nvme0n1", 00:17:48.102 "thin_provision": true, 00:17:48.102 "num_allocated_clusters": 0, 00:17:48.102 "snapshot": false, 00:17:48.102 "clone": false, 00:17:48.102 "esnap_clone": false 00:17:48.102 } 00:17:48.102 } 00:17:48.102 } 00:17:48.102 ]' 00:17:48.102 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:48.102 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:48.102 18:35:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:48.102 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:48.363 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.623 { 00:17:48.623 "name": 
"f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.623 "aliases": [ 00:17:48.623 "lvs/nvme0n1p0" 00:17:48.623 ], 00:17:48.623 "product_name": "Logical Volume", 00:17:48.623 "block_size": 4096, 00:17:48.623 "num_blocks": 26476544, 00:17:48.623 "uuid": "f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.623 "assigned_rate_limits": { 00:17:48.623 "rw_ios_per_sec": 0, 00:17:48.623 "rw_mbytes_per_sec": 0, 00:17:48.623 "r_mbytes_per_sec": 0, 00:17:48.623 "w_mbytes_per_sec": 0 00:17:48.623 }, 00:17:48.623 "claimed": false, 00:17:48.623 "zoned": false, 00:17:48.623 "supported_io_types": { 00:17:48.623 "read": true, 00:17:48.623 "write": true, 00:17:48.623 "unmap": true, 00:17:48.623 "write_zeroes": true, 00:17:48.623 "flush": false, 00:17:48.623 "reset": true, 00:17:48.623 "compare": false, 00:17:48.623 "compare_and_write": false, 00:17:48.623 "abort": false, 00:17:48.623 "nvme_admin": false, 00:17:48.623 "nvme_io": false 00:17:48.623 }, 00:17:48.623 "driver_specific": { 00:17:48.623 "lvol": { 00:17:48.623 "lvol_store_uuid": "22e22162-3164-459c-952a-57b0f22aa5e0", 00:17:48.623 "base_bdev": "nvme0n1", 00:17:48.623 "thin_provision": true, 00:17:48.623 "num_allocated_clusters": 0, 00:17:48.623 "snapshot": false, 00:17:48.623 "clone": false, 00:17:48.623 "esnap_clone": false 00:17:48.623 } 00:17:48.623 } 00:17:48.623 } 00:17:48.623 ]' 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:48.623 18:35:48 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f090fcef-daf4-4175-b0fc-7c5abb2e58ac 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.883 { 00:17:48.883 "name": "f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.883 "aliases": [ 00:17:48.883 "lvs/nvme0n1p0" 00:17:48.883 ], 00:17:48.883 "product_name": "Logical Volume", 00:17:48.883 "block_size": 4096, 00:17:48.883 "num_blocks": 26476544, 00:17:48.883 "uuid": "f090fcef-daf4-4175-b0fc-7c5abb2e58ac", 00:17:48.883 "assigned_rate_limits": { 00:17:48.883 "rw_ios_per_sec": 0, 00:17:48.883 "rw_mbytes_per_sec": 0, 00:17:48.883 "r_mbytes_per_sec": 0, 00:17:48.883 "w_mbytes_per_sec": 0 00:17:48.883 }, 00:17:48.883 "claimed": false, 
00:17:48.883 "zoned": false, 00:17:48.883 "supported_io_types": { 00:17:48.883 "read": true, 00:17:48.883 "write": true, 00:17:48.883 "unmap": true, 00:17:48.883 "write_zeroes": true, 00:17:48.883 "flush": false, 00:17:48.883 "reset": true, 00:17:48.883 "compare": false, 00:17:48.883 "compare_and_write": false, 00:17:48.883 "abort": false, 00:17:48.883 "nvme_admin": false, 00:17:48.883 "nvme_io": false 00:17:48.883 }, 00:17:48.883 "driver_specific": { 00:17:48.883 "lvol": { 00:17:48.883 "lvol_store_uuid": "22e22162-3164-459c-952a-57b0f22aa5e0", 00:17:48.883 "base_bdev": "nvme0n1", 00:17:48.883 "thin_provision": true, 00:17:48.883 "num_allocated_clusters": 0, 00:17:48.883 "snapshot": false, 00:17:48.883 "clone": false, 00:17:48.883 "esnap_clone": false 00:17:48.883 } 00:17:48.883 } 00:17:48.883 } 00:17:48.883 ]' 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:48.883 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:49.144 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:49.144 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:49.144 18:35:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:49.144 18:35:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:49.144 18:35:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f090fcef-daf4-4175-b0fc-7c5abb2e58ac -c nvc0n1p0 --l2p_dram_limit 20 00:17:49.144 [2024-07-23 18:35:49.129664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.129721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.144 [2024-07-23 18:35:49.129737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:49.144 [2024-07-23 18:35:49.129755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.129844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.129866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.144 [2024-07-23 18:35:49.129874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:49.144 [2024-07-23 18:35:49.129899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.129922] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.144 [2024-07-23 18:35:49.130327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.144 [2024-07-23 18:35:49.130347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.130361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.144 [2024-07-23 18:35:49.130369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:17:49.144 [2024-07-23 18:35:49.130379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.130443] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fca981c7-c912-4932-b7c1-7f2aa50083fe 00:17:49.144 [2024-07-23 18:35:49.132888] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.132932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:49.144 [2024-07-23 18:35:49.132944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:49.144 [2024-07-23 18:35:49.132955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.147045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.147075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.144 [2024-07-23 18:35:49.147095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:17:49.144 [2024-07-23 18:35:49.147104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.147211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.147224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.144 [2024-07-23 18:35:49.147235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:49.144 [2024-07-23 18:35:49.147249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.147326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.147336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:49.144 [2024-07-23 18:35:49.147346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:49.144 [2024-07-23 18:35:49.147361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.147390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.144 [2024-07-23 18:35:49.150168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.150209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.144 [2024-07-23 18:35:49.150218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:17:49.144 [2024-07-23 18:35:49.150228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.150261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.144 [2024-07-23 18:35:49.150273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:49.144 [2024-07-23 18:35:49.150280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:49.144 [2024-07-23 18:35:49.150292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.144 [2024-07-23 18:35:49.150316] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:49.144 [2024-07-23 18:35:49.150455] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:49.144 [2024-07-23 18:35:49.150468] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:49.144 [2024-07-23 18:35:49.150482] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:49.144 [2024-07-23 18:35:49.150491] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:49.144 
[2024-07-23 18:35:49.150502] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:49.144 [2024-07-23 18:35:49.150512] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:49.144 [2024-07-23 18:35:49.150522] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:49.144 [2024-07-23 18:35:49.150533] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:49.145 [2024-07-23 18:35:49.150543] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:49.145 [2024-07-23 18:35:49.150557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.145 [2024-07-23 18:35:49.150581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:49.145 [2024-07-23 18:35:49.150590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:49.145 [2024-07-23 18:35:49.150599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.145 [2024-07-23 18:35:49.150672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.145 [2024-07-23 18:35:49.150686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:49.145 [2024-07-23 18:35:49.150701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:49.145 [2024-07-23 18:35:49.150716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.145 [2024-07-23 18:35:49.150795] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:49.145 [2024-07-23 18:35:49.150808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:49.145 [2024-07-23 18:35:49.150816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.145 [2024-07-23 18:35:49.150829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:49.145 [2024-07-23 18:35:49.150845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:49.145 [2024-07-23 18:35:49.150861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:49.145 [2024-07-23 18:35:49.150869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.145 [2024-07-23 18:35:49.150884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:49.145 [2024-07-23 18:35:49.150895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:49.145 [2024-07-23 18:35:49.150902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.145 [2024-07-23 18:35:49.150913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:49.145 [2024-07-23 18:35:49.150920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:49.145 [2024-07-23 18:35:49.150929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:49.145 [2024-07-23 18:35:49.150945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:49.145 [2024-07-23 
18:35:49.150951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:49.145 [2024-07-23 18:35:49.150970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:49.145 [2024-07-23 18:35:49.150979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.145 [2024-07-23 18:35:49.150985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:49.145 [2024-07-23 18:35:49.150993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.145 [2024-07-23 18:35:49.151007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:49.145 [2024-07-23 18:35:49.151014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.145 [2024-07-23 18:35:49.151029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:49.145 [2024-07-23 18:35:49.151040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.145 [2024-07-23 18:35:49.151054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:49.145 [2024-07-23 18:35:49.151060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.145 [2024-07-23 18:35:49.151073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:49.145 [2024-07-23 18:35:49.151084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:49.145 [2024-07-23 18:35:49.151091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.145 [2024-07-23 18:35:49.151098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:49.145 [2024-07-23 18:35:49.151105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:49.145 [2024-07-23 18:35:49.151113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:49.145 [2024-07-23 18:35:49.151127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:49.145 [2024-07-23 18:35:49.151133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151141] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:49.145 [2024-07-23 18:35:49.151149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:49.145 [2024-07-23 18:35:49.151169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.145 [2024-07-23 18:35:49.151176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.145 [2024-07-23 18:35:49.151202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:49.145 [2024-07-23 18:35:49.151208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:49.145 [2024-07-23 18:35:49.151217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:17:49.145 [2024-07-23 18:35:49.151224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:49.145 [2024-07-23 18:35:49.151234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:49.145 [2024-07-23 18:35:49.151241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:49.145 [2024-07-23 18:35:49.151255] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:49.145 [2024-07-23 18:35:49.151265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:49.145 [2024-07-23 18:35:49.151286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:49.145 [2024-07-23 18:35:49.151296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:49.145 [2024-07-23 18:35:49.151306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:49.145 [2024-07-23 18:35:49.151315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:49.145 [2024-07-23 18:35:49.151322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:49.145 [2024-07-23 18:35:49.151334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:49.145 [2024-07-23 18:35:49.151341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:49.145 [2024-07-23 18:35:49.151350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:49.145 [2024-07-23 18:35:49.151357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:49.145 [2024-07-23 18:35:49.151422] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:49.145 [2024-07-23 18:35:49.151430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:49.145 [2024-07-23 18:35:49.151461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:49.145 [2024-07-23 18:35:49.151472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:49.145 [2024-07-23 18:35:49.151480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:49.145 [2024-07-23 18:35:49.151490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.145 [2024-07-23 18:35:49.151498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:49.145 [2024-07-23 18:35:49.151511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:17:49.145 [2024-07-23 18:35:49.151519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.145 [2024-07-23 18:35:49.151587] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:49.145 [2024-07-23 18:35:49.151599] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:53.364 [2024-07-23 18:35:52.809417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.809503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:53.364 [2024-07-23 18:35:52.809521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3664.865 ms 00:17:53.364 [2024-07-23 18:35:52.809529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.838760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.838900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.364 [2024-07-23 18:35:52.838979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.109 ms 00:17:53.364 [2024-07-23 18:35:52.839008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.839387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.839458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:53.364 [2024-07-23 18:35:52.839521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:53.364 [2024-07-23 18:35:52.839549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.861746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.861798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.364 [2024-07-23 18:35:52.861835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.025 ms 00:17:53.364 [2024-07-23 18:35:52.861849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.861903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.861922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.364 [2024-07-23 18:35:52.861939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:53.364 [2024-07-23 18:35:52.861952] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.862839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.862873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.364 [2024-07-23 18:35:52.862892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:17:53.364 [2024-07-23 18:35:52.862904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.863075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.863115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.364 [2024-07-23 18:35:52.863136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:53.364 [2024-07-23 18:35:52.863148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.873834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.873869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.364 [2024-07-23 18:35:52.873884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.663 ms 00:17:53.364 [2024-07-23 18:35:52.873894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.883224] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:53.364 [2024-07-23 18:35:52.892522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.892562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:53.364 [2024-07-23 18:35:52.892583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.570 ms 00:17:53.364 [2024-07-23 18:35:52.892601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.975297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.975376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:53.364 [2024-07-23 18:35:52.975391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.824 ms 00:17:53.364 [2024-07-23 18:35:52.975418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.975638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.975661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.364 [2024-07-23 18:35:52.975671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:53.364 [2024-07-23 18:35:52.975680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.979366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.979408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:53.364 [2024-07-23 18:35:52.979419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:17:53.364 [2024-07-23 18:35:52.979433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.982417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.982454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:17:53.364 [2024-07-23 18:35:52.982465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:17:53.364 [2024-07-23 18:35:52.982474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:52.982765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:52.982786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:53.364 [2024-07-23 18:35:52.982795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:53.364 [2024-07-23 18:35:52.982808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:53.026661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:53.026716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:53.364 [2024-07-23 18:35:53.026730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.914 ms 00:17:53.364 [2024-07-23 18:35:53.026745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:53.032101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.364 [2024-07-23 18:35:53.032142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:53.364 [2024-07-23 18:35:53.032161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:17:53.364 [2024-07-23 18:35:53.032172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.364 [2024-07-23 18:35:53.035359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.365 [2024-07-23 18:35:53.035396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:53.365 [2024-07-23 18:35:53.035405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:17:53.365 [2024-07-23 18:35:53.035415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.365 [2024-07-23 18:35:53.038889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.365 [2024-07-23 18:35:53.038923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.365 [2024-07-23 18:35:53.038932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.450 ms 00:17:53.365 [2024-07-23 18:35:53.038946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.365 [2024-07-23 18:35:53.038981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.365 [2024-07-23 18:35:53.038993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.365 [2024-07-23 18:35:53.039003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:53.365 [2024-07-23 18:35:53.039013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.365 [2024-07-23 18:35:53.039080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.365 [2024-07-23 18:35:53.039092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.365 [2024-07-23 18:35:53.039100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:53.365 [2024-07-23 18:35:53.039110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.365 [2024-07-23 18:35:53.040520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 3917.920 ms, result 0 00:17:53.365 { 00:17:53.365 "name": "ftl0", 00:17:53.365 "uuid": "fca981c7-c912-4932-b7c1-7f2aa50083fe" 00:17:53.365 } 00:17:53.365 18:35:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:53.365 18:35:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:53.365 18:35:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:53.365 18:35:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:53.365 [2024-07-23 18:35:53.327833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:53.365 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:53.365 Zero copy mechanism will not be used. 00:17:53.365 Running I/O for 4 seconds... 00:17:57.554 00:17:57.554 Latency(us) 00:17:57.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.554 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:57.554 ftl0 : 4.00 1746.23 115.96 0.00 0.00 599.33 213.74 1051.72 00:17:57.554 =================================================================================================================== 00:17:57.554 Total : 1746.23 115.96 0.00 0.00 599.33 213.74 1051.72 00:17:57.554 0 00:17:57.554 [2024-07-23 18:35:57.327669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:57.554 18:35:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:57.554 [2024-07-23 18:35:57.442898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:57.554 Running I/O for 4 seconds... 00:18:01.754 00:18:01.754 Latency(us) 00:18:01.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.754 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.754 ftl0 : 4.01 10429.83 40.74 0.00 0.00 12247.26 300.49 23352.57 00:18:01.754 =================================================================================================================== 00:18:01.754 Total : 10429.83 40.74 0.00 0.00 12247.26 0.00 23352.57 00:18:01.754 [2024-07-23 18:36:01.454924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:01.754 0 00:18:01.754 18:36:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:01.754 [2024-07-23 18:36:01.563760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:01.754 Running I/O for 4 seconds... 
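Note on the runs above: the three perform_tests calls drive the same bdevperf process (started in wait-for-tests mode, -z -T ftl0, as shown at the end of this test) over its RPC socket, first with single-depth 69632-byte random writes, then queue-depth-128 4 KiB random writes, then a 4 KiB verify pass. A minimal, hand-run sketch of one such cycle is below; it assumes bdevperf is reachable on the default RPC socket and that the FTL bdev configuration (omitted here) has already been supplied, so treat it as illustrative rather than the exact invocation used by ftl/bdevperf.sh:

  # Start bdevperf waiting for RPC-triggered tests against the bdev named ftl0,
  # then trigger one 4-second randwrite run over its RPC socket.
  ./build/examples/bdevperf -z -T ftl0 &
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
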
00:18:05.945 00:18:05.945 Latency(us) 00:18:05.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.945 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:05.945 Verification LBA range: start 0x0 length 0x1400000 00:18:05.945 ftl0 : 4.01 8132.04 31.77 0.00 0.00 15690.74 264.72 38692.00 00:18:05.945 =================================================================================================================== 00:18:05.945 Total : 8132.04 31.77 0.00 0.00 15690.74 0.00 38692.00 00:18:05.945 0 00:18:05.945 [2024-07-23 18:36:05.572809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:05.945 18:36:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:05.945 [2024-07-23 18:36:05.755463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.755522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:05.945 [2024-07-23 18:36:05.755537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.945 [2024-07-23 18:36:05.755547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.755585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.945 [2024-07-23 18:36:05.756810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.756830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:05.945 [2024-07-23 18:36:05.756842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:18:05.945 [2024-07-23 18:36:05.756849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.758704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.758739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:05.945 [2024-07-23 18:36:05.758752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:18:05.945 [2024-07-23 18:36:05.758760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.971149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.971213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:05.945 [2024-07-23 18:36:05.971237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 212.765 ms 00:18:05.945 [2024-07-23 18:36:05.971247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.976317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.976345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:05.945 [2024-07-23 18:36:05.976358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:18:05.945 [2024-07-23 18:36:05.976366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.978948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.978977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:05.945 [2024-07-23 18:36:05.978989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.517 ms 00:18:05.945 [2024-07-23 18:36:05.978997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.984286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.984320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:05.945 [2024-07-23 18:36:05.984332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.268 ms 00:18:05.945 [2024-07-23 18:36:05.984340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.984450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.945 [2024-07-23 18:36:05.984462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:05.945 [2024-07-23 18:36:05.984473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:05.945 [2024-07-23 18:36:05.984480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.945 [2024-07-23 18:36:05.986631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.946 [2024-07-23 18:36:05.986658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:05.946 [2024-07-23 18:36:05.986669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:18:05.946 [2024-07-23 18:36:05.986677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.946 [2024-07-23 18:36:05.988286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.946 [2024-07-23 18:36:05.988314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:05.946 [2024-07-23 18:36:05.988325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:18:05.946 [2024-07-23 18:36:05.988333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.946 [2024-07-23 18:36:05.989637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.946 [2024-07-23 18:36:05.989663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:05.946 [2024-07-23 18:36:05.989675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:18:05.946 [2024-07-23 18:36:05.989681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.946 [2024-07-23 18:36:05.990696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.946 [2024-07-23 18:36:05.990724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:05.946 [2024-07-23 18:36:05.990736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:18:05.946 [2024-07-23 18:36:05.990743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.946 [2024-07-23 18:36:05.990769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:05.946 [2024-07-23 18:36:05.990789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 
[2024-07-23 18:36:05.990840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.990997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:18:05.946 [2024-07-23 18:36:05.991083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:05.946 [2024-07-23 18:36:05.991520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:05.947 [2024-07-23 18:36:05.991813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:05.947 [2024-07-23 18:36:05.991825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fca981c7-c912-4932-b7c1-7f2aa50083fe 00:18:05.947 [2024-07-23 18:36:05.991845] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:05.947 [2024-07-23 18:36:05.991856] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:18:05.947 [2024-07-23 18:36:05.991863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:05.947 [2024-07-23 18:36:05.991874] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:05.947 [2024-07-23 18:36:05.991882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:05.947 [2024-07-23 18:36:05.991896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:05.947 [2024-07-23 18:36:05.991904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:05.947 [2024-07-23 18:36:05.991914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:05.947 [2024-07-23 18:36:05.991921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:05.947 [2024-07-23 18:36:05.991931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.947 [2024-07-23 18:36:05.991944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:05.947 [2024-07-23 18:36:05.991958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:18:05.947 [2024-07-23 18:36:05.991966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.947 [2024-07-23 18:36:05.994935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.947 [2024-07-23 18:36:05.994954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:05.947 [2024-07-23 18:36:05.994966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:18:05.947 [2024-07-23 18:36:05.994973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.947 [2024-07-23 18:36:05.995146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.947 [2024-07-23 18:36:05.995158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:05.947 [2024-07-23 18:36:05.995185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:05.947 [2024-07-23 18:36:05.995193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.004832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.004859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.207 [2024-07-23 18:36:06.004871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.004887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.004945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.004956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.207 [2024-07-23 18:36:06.004966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.004973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.005045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.005055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.207 [2024-07-23 18:36:06.005065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.005073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.005096] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.005105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.207 [2024-07-23 18:36:06.005118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.005125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.028035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.028084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.207 [2024-07-23 18:36:06.028098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.028106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.041172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.207 [2024-07-23 18:36:06.041205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.207 [2024-07-23 18:36:06.041241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.207 [2024-07-23 18:36:06.041248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.207 [2024-07-23 18:36:06.041329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.208 [2024-07-23 18:36:06.041349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.208 [2024-07-23 18:36:06.041398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.208 [2024-07-23 18:36:06.041417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.208 [2024-07-23 18:36:06.041516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.208 [2024-07-23 18:36:06.041538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.208 [2024-07-23 18:36:06.041599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.208 [2024-07-23 18:36:06.041620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.208 [2024-07-23 18:36:06.041676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.208 [2024-07-23 18:36:06.041694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:06.208 [2024-07-23 18:36:06.041750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.208 [2024-07-23 18:36:06.041758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.208 [2024-07-23 18:36:06.041769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.208 [2024-07-23 18:36:06.041790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.208 [2024-07-23 18:36:06.041939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.971 ms, result 0 00:18:06.208 true 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 89204 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 89204 ']' 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 89204 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89204 00:18:06.208 killing process with pid 89204 00:18:06.208 Received shutdown signal, test time was about 4.000000 seconds 00:18:06.208 00:18:06.208 Latency(us) 00:18:06.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.208 =================================================================================================================== 00:18:06.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89204' 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 89204 00:18:06.208 18:36:06 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 89204 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:08.114 Remove shared memory files 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:08.114 18:36:07 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:08.114 18:36:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:08.114 18:36:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:08.114 18:36:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:08.114 ************************************ 00:18:08.114 END TEST ftl_bdevperf 00:18:08.114 ************************************ 00:18:08.114 00:18:08.114 real 0m22.606s 00:18:08.114 user 0m24.841s 00:18:08.114 sys 0m1.164s 00:18:08.114 18:36:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.114 18:36:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:18:08.114 18:36:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:08.114 18:36:08 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:18:08.114 18:36:08 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.114 18:36:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:08.114 ************************************ 00:18:08.114 START TEST ftl_trim 00:18:08.114 ************************************ 00:18:08.114 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:08.114 * Looking for test storage... 00:18:08.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:08.114 18:36:08 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:08.374 18:36:08 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89563 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:08.374 18:36:08 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89563 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89563 ']' 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:08.374 18:36:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:08.374 [2024-07-23 18:36:08.250303] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
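Note on the start-up above: before any bdev RPCs are issued, trim.sh launches the SPDK target on three cores (-m 0x7) and blocks in waitforlisten until the target's RPC socket (the default /var/tmp/spdk.sock here) starts answering. A rough sketch of that handshake is below; the polling loop is illustrative and is not the actual waitforlisten implementation:

  # Launch the target, then poll its RPC server until it responds.
  ./build/bin/spdk_tgt -m 0x7 &
  svcpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
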
00:18:08.374 [2024-07-23 18:36:08.250507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89563 ] 00:18:08.374 [2024-07-23 18:36:08.399531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.634 [2024-07-23 18:36:08.471735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.634 [2024-07-23 18:36:08.471874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.634 [2024-07-23 18:36:08.471823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.203 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:09.203 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:09.203 18:36:09 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:09.463 18:36:09 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:09.463 18:36:09 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:09.463 18:36:09 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:09.463 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:18:09.463 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:09.463 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:18:09.463 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:18:09.463 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:09.723 { 00:18:09.723 "name": "nvme0n1", 00:18:09.723 "aliases": [ 00:18:09.723 "a54f7dda-5b92-4c1a-aeb1-a26d5593834a" 00:18:09.723 ], 00:18:09.723 "product_name": "NVMe disk", 00:18:09.723 "block_size": 4096, 00:18:09.723 "num_blocks": 1310720, 00:18:09.723 "uuid": "a54f7dda-5b92-4c1a-aeb1-a26d5593834a", 00:18:09.723 "assigned_rate_limits": { 00:18:09.723 "rw_ios_per_sec": 0, 00:18:09.723 "rw_mbytes_per_sec": 0, 00:18:09.723 "r_mbytes_per_sec": 0, 00:18:09.723 "w_mbytes_per_sec": 0 00:18:09.723 }, 00:18:09.723 "claimed": true, 00:18:09.723 "claim_type": "read_many_write_one", 00:18:09.723 "zoned": false, 00:18:09.723 "supported_io_types": { 00:18:09.723 "read": true, 00:18:09.723 "write": true, 00:18:09.723 "unmap": true, 00:18:09.723 "write_zeroes": true, 00:18:09.723 "flush": true, 00:18:09.723 "reset": true, 00:18:09.723 "compare": true, 00:18:09.723 "compare_and_write": false, 00:18:09.723 "abort": true, 00:18:09.723 "nvme_admin": true, 00:18:09.723 "nvme_io": true 00:18:09.723 }, 00:18:09.723 "driver_specific": { 00:18:09.723 "nvme": [ 00:18:09.723 { 00:18:09.723 "pci_address": "0000:00:11.0", 00:18:09.723 "trid": { 00:18:09.723 "trtype": "PCIe", 00:18:09.723 "traddr": "0000:00:11.0" 00:18:09.723 }, 00:18:09.723 "ctrlr_data": { 
00:18:09.723 "cntlid": 0, 00:18:09.723 "vendor_id": "0x1b36", 00:18:09.723 "model_number": "QEMU NVMe Ctrl", 00:18:09.723 "serial_number": "12341", 00:18:09.723 "firmware_revision": "8.0.0", 00:18:09.723 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:09.723 "oacs": { 00:18:09.723 "security": 0, 00:18:09.723 "format": 1, 00:18:09.723 "firmware": 0, 00:18:09.723 "ns_manage": 1 00:18:09.723 }, 00:18:09.723 "multi_ctrlr": false, 00:18:09.723 "ana_reporting": false 00:18:09.723 }, 00:18:09.723 "vs": { 00:18:09.723 "nvme_version": "1.4" 00:18:09.723 }, 00:18:09.723 "ns_data": { 00:18:09.723 "id": 1, 00:18:09.723 "can_share": false 00:18:09.723 } 00:18:09.723 } 00:18:09.723 ], 00:18:09.723 "mp_policy": "active_passive" 00:18:09.723 } 00:18:09.723 } 00:18:09.723 ]' 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:18:09.723 18:36:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:18:09.723 18:36:09 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:09.723 18:36:09 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:09.723 18:36:09 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:09.723 18:36:09 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:09.723 18:36:09 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:09.982 18:36:09 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=22e22162-3164-459c-952a-57b0f22aa5e0 00:18:09.982 18:36:09 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:09.983 18:36:09 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22e22162-3164-459c-952a-57b0f22aa5e0 00:18:10.242 18:36:10 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:10.242 18:36:10 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5ee1391b-e2cc-4fc0-be38-5e3160aed5c2 00:18:10.242 18:36:10 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5ee1391b-e2cc-4fc0-be38-5e3160aed5c2 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:10.502 18:36:10 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.502 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.502 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:10.502 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:18:10.502 18:36:10 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:18:10.502 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:10.762 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:10.762 { 00:18:10.762 "name": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:10.762 "aliases": [ 00:18:10.762 "lvs/nvme0n1p0" 00:18:10.762 ], 00:18:10.762 "product_name": "Logical Volume", 00:18:10.762 "block_size": 4096, 00:18:10.762 "num_blocks": 26476544, 00:18:10.762 "uuid": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:10.762 "assigned_rate_limits": { 00:18:10.762 "rw_ios_per_sec": 0, 00:18:10.762 "rw_mbytes_per_sec": 0, 00:18:10.762 "r_mbytes_per_sec": 0, 00:18:10.762 "w_mbytes_per_sec": 0 00:18:10.762 }, 00:18:10.762 "claimed": false, 00:18:10.762 "zoned": false, 00:18:10.762 "supported_io_types": { 00:18:10.762 "read": true, 00:18:10.762 "write": true, 00:18:10.762 "unmap": true, 00:18:10.762 "write_zeroes": true, 00:18:10.762 "flush": false, 00:18:10.762 "reset": true, 00:18:10.762 "compare": false, 00:18:10.762 "compare_and_write": false, 00:18:10.762 "abort": false, 00:18:10.762 "nvme_admin": false, 00:18:10.762 "nvme_io": false 00:18:10.762 }, 00:18:10.762 "driver_specific": { 00:18:10.762 "lvol": { 00:18:10.762 "lvol_store_uuid": "5ee1391b-e2cc-4fc0-be38-5e3160aed5c2", 00:18:10.762 "base_bdev": "nvme0n1", 00:18:10.762 "thin_provision": true, 00:18:10.762 "num_allocated_clusters": 0, 00:18:10.762 "snapshot": false, 00:18:10.763 "clone": false, 00:18:10.763 "esnap_clone": false 00:18:10.763 } 00:18:10.763 } 00:18:10.763 } 00:18:10.763 ]' 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:10.763 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:18:10.763 18:36:10 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:10.763 18:36:10 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:10.763 18:36:10 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:11.022 18:36:10 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:11.022 18:36:10 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:11.022 18:36:10 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.022 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.022 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:11.022 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:18:11.022 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:18:11.022 18:36:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.282 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:11.282 { 00:18:11.282 "name": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:11.282 "aliases": [ 00:18:11.282 
"lvs/nvme0n1p0" 00:18:11.282 ], 00:18:11.282 "product_name": "Logical Volume", 00:18:11.282 "block_size": 4096, 00:18:11.282 "num_blocks": 26476544, 00:18:11.282 "uuid": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:11.282 "assigned_rate_limits": { 00:18:11.282 "rw_ios_per_sec": 0, 00:18:11.282 "rw_mbytes_per_sec": 0, 00:18:11.282 "r_mbytes_per_sec": 0, 00:18:11.282 "w_mbytes_per_sec": 0 00:18:11.282 }, 00:18:11.282 "claimed": false, 00:18:11.282 "zoned": false, 00:18:11.282 "supported_io_types": { 00:18:11.282 "read": true, 00:18:11.282 "write": true, 00:18:11.282 "unmap": true, 00:18:11.282 "write_zeroes": true, 00:18:11.282 "flush": false, 00:18:11.282 "reset": true, 00:18:11.282 "compare": false, 00:18:11.282 "compare_and_write": false, 00:18:11.282 "abort": false, 00:18:11.282 "nvme_admin": false, 00:18:11.282 "nvme_io": false 00:18:11.282 }, 00:18:11.282 "driver_specific": { 00:18:11.282 "lvol": { 00:18:11.282 "lvol_store_uuid": "5ee1391b-e2cc-4fc0-be38-5e3160aed5c2", 00:18:11.282 "base_bdev": "nvme0n1", 00:18:11.283 "thin_provision": true, 00:18:11.283 "num_allocated_clusters": 0, 00:18:11.283 "snapshot": false, 00:18:11.283 "clone": false, 00:18:11.283 "esnap_clone": false 00:18:11.283 } 00:18:11.283 } 00:18:11.283 } 00:18:11.283 ]' 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:11.283 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:18:11.283 18:36:11 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:11.283 18:36:11 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:11.543 18:36:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:11.543 18:36:11 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:11.543 18:36:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:11.543 { 00:18:11.543 "name": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:11.543 "aliases": [ 00:18:11.543 "lvs/nvme0n1p0" 00:18:11.543 ], 00:18:11.543 "product_name": "Logical Volume", 00:18:11.543 "block_size": 4096, 00:18:11.543 "num_blocks": 26476544, 00:18:11.543 "uuid": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:11.543 "assigned_rate_limits": { 00:18:11.543 "rw_ios_per_sec": 0, 00:18:11.543 "rw_mbytes_per_sec": 0, 00:18:11.543 "r_mbytes_per_sec": 0, 00:18:11.543 "w_mbytes_per_sec": 0 00:18:11.543 }, 00:18:11.543 "claimed": false, 00:18:11.543 "zoned": false, 00:18:11.543 "supported_io_types": { 00:18:11.543 "read": 
true, 00:18:11.543 "write": true, 00:18:11.543 "unmap": true, 00:18:11.543 "write_zeroes": true, 00:18:11.543 "flush": false, 00:18:11.543 "reset": true, 00:18:11.543 "compare": false, 00:18:11.543 "compare_and_write": false, 00:18:11.543 "abort": false, 00:18:11.543 "nvme_admin": false, 00:18:11.543 "nvme_io": false 00:18:11.543 }, 00:18:11.543 "driver_specific": { 00:18:11.543 "lvol": { 00:18:11.543 "lvol_store_uuid": "5ee1391b-e2cc-4fc0-be38-5e3160aed5c2", 00:18:11.543 "base_bdev": "nvme0n1", 00:18:11.543 "thin_provision": true, 00:18:11.543 "num_allocated_clusters": 0, 00:18:11.543 "snapshot": false, 00:18:11.543 "clone": false, 00:18:11.543 "esnap_clone": false 00:18:11.543 } 00:18:11.543 } 00:18:11.543 } 00:18:11.543 ]' 00:18:11.543 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:11.804 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:18:11.804 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:11.804 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:11.804 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:11.804 18:36:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:18:11.804 18:36:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:11.804 18:36:11 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2cdb2f17-8677-45e5-9d72-57a64b2dcf33 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:11.804 [2024-07-23 18:36:11.792886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.792985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:11.804 [2024-07-23 18:36:11.793024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:11.804 [2024-07-23 18:36:11.793045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.795678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.795715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.804 [2024-07-23 18:36:11.795728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:18:11.804 [2024-07-23 18:36:11.795736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.795877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:11.804 [2024-07-23 18:36:11.796109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:11.804 [2024-07-23 18:36:11.796126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.796134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.804 [2024-07-23 18:36:11.796145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:11.804 [2024-07-23 18:36:11.796156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.796260] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:11.804 [2024-07-23 18:36:11.798768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.798831] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:11.804 [2024-07-23 18:36:11.798859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:11.804 [2024-07-23 18:36:11.798881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.813245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.813323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.804 [2024-07-23 18:36:11.813356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.276 ms 00:18:11.804 [2024-07-23 18:36:11.813380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.813604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.813654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.804 [2024-07-23 18:36:11.813684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:11.804 [2024-07-23 18:36:11.813706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.813818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.813855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:11.804 [2024-07-23 18:36:11.813882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:11.804 [2024-07-23 18:36:11.813906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.813981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:11.804 [2024-07-23 18:36:11.816794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.816848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.804 [2024-07-23 18:36:11.816881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:18:11.804 [2024-07-23 18:36:11.816903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.817000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.817031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:11.804 [2024-07-23 18:36:11.817060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:11.804 [2024-07-23 18:36:11.817078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.817188] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:11.804 [2024-07-23 18:36:11.817357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:11.804 [2024-07-23 18:36:11.817407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:11.804 [2024-07-23 18:36:11.817446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:11.804 [2024-07-23 18:36:11.817526] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:11.804 [2024-07-23 18:36:11.817577] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:18:11.804 [2024-07-23 18:36:11.817616] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:11.804 [2024-07-23 18:36:11.817675] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:11.804 [2024-07-23 18:36:11.817701] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:11.804 [2024-07-23 18:36:11.817723] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:11.804 [2024-07-23 18:36:11.817761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.817788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:11.804 [2024-07-23 18:36:11.817820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:18:11.804 [2024-07-23 18:36:11.817840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.817948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.804 [2024-07-23 18:36:11.817957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:11.804 [2024-07-23 18:36:11.817970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:11.804 [2024-07-23 18:36:11.817977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.804 [2024-07-23 18:36:11.818095] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:11.804 [2024-07-23 18:36:11.818105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:11.804 [2024-07-23 18:36:11.818118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.804 [2024-07-23 18:36:11.818126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.804 [2024-07-23 18:36:11.818138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:11.804 [2024-07-23 18:36:11.818144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:11.804 [2024-07-23 18:36:11.818153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:11.804 [2024-07-23 18:36:11.818161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:11.804 [2024-07-23 18:36:11.818169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:11.804 [2024-07-23 18:36:11.818176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.804 [2024-07-23 18:36:11.818185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:11.804 [2024-07-23 18:36:11.818191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:11.804 [2024-07-23 18:36:11.818199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.804 [2024-07-23 18:36:11.818205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:11.804 [2024-07-23 18:36:11.818216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:11.804 [2024-07-23 18:36:11.818222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.804 [2024-07-23 18:36:11.818230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:11.805 [2024-07-23 18:36:11.818236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:18:11.805 [2024-07-23 18:36:11.818252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:11.805 [2024-07-23 18:36:11.818261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:11.805 [2024-07-23 18:36:11.818281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:11.805 [2024-07-23 18:36:11.818304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:11.805 [2024-07-23 18:36:11.818325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:11.805 [2024-07-23 18:36:11.818351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.805 [2024-07-23 18:36:11.818366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:11.805 [2024-07-23 18:36:11.818393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:11.805 [2024-07-23 18:36:11.818403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.805 [2024-07-23 18:36:11.818409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:11.805 [2024-07-23 18:36:11.818418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:11.805 [2024-07-23 18:36:11.818424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:11.805 [2024-07-23 18:36:11.818439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:11.805 [2024-07-23 18:36:11.818447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818454] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:11.805 [2024-07-23 18:36:11.818464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:11.805 [2024-07-23 18:36:11.818472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.805 [2024-07-23 18:36:11.818492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:11.805 [2024-07-23 18:36:11.818501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:11.805 [2024-07-23 18:36:11.818507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:11.805 [2024-07-23 18:36:11.818516] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:18:11.805 [2024-07-23 18:36:11.818523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:11.805 [2024-07-23 18:36:11.818532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:11.805 [2024-07-23 18:36:11.818544] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:11.805 [2024-07-23 18:36:11.818558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:11.805 [2024-07-23 18:36:11.818593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:11.805 [2024-07-23 18:36:11.818600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:11.805 [2024-07-23 18:36:11.818611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:11.805 [2024-07-23 18:36:11.818618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:11.805 [2024-07-23 18:36:11.818627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:11.805 [2024-07-23 18:36:11.818634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:11.805 [2024-07-23 18:36:11.818645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:11.805 [2024-07-23 18:36:11.818651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:11.805 [2024-07-23 18:36:11.818661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:11.805 [2024-07-23 18:36:11.818699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:11.805 [2024-07-23 18:36:11.818723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.805 [2024-07-23 18:36:11.818734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:11.805 [2024-07-23 
18:36:11.818744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:11.805 [2024-07-23 18:36:11.818751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:11.805 [2024-07-23 18:36:11.818760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:11.805 [2024-07-23 18:36:11.818769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.805 [2024-07-23 18:36:11.818779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:11.805 [2024-07-23 18:36:11.818809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:18:11.805 [2024-07-23 18:36:11.818822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.805 [2024-07-23 18:36:11.818957] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:11.805 [2024-07-23 18:36:11.818983] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:15.999 [2024-07-23 18:36:15.337052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.999 [2024-07-23 18:36:15.337124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:15.999 [2024-07-23 18:36:15.337142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3524.873 ms 00:18:15.999 [2024-07-23 18:36:15.337158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.999 [2024-07-23 18:36:15.348640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.999 [2024-07-23 18:36:15.348702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.999 [2024-07-23 18:36:15.348718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.387 ms 00:18:15.999 [2024-07-23 18:36:15.348730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.348907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.348927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.000 [2024-07-23 18:36:15.348938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:16.000 [2024-07-23 18:36:15.348954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.365528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.365605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.000 [2024-07-23 18:36:15.365621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.567 ms 00:18:16.000 [2024-07-23 18:36:15.365640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.365743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.365770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.000 [2024-07-23 18:36:15.365786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.000 [2024-07-23 18:36:15.365806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 
18:36:15.366281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.366308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.000 [2024-07-23 18:36:15.366322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:18:16.000 [2024-07-23 18:36:15.366333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.366477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.366507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.000 [2024-07-23 18:36:15.366518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:16.000 [2024-07-23 18:36:15.366530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.373922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.373973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.000 [2024-07-23 18:36:15.373987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.331 ms 00:18:16.000 [2024-07-23 18:36:15.374000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.381960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:16.000 [2024-07-23 18:36:15.398615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.398680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.000 [2024-07-23 18:36:15.398699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.515 ms 00:18:16.000 [2024-07-23 18:36:15.398708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.490649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.490710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:16.000 [2024-07-23 18:36:15.490730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.984 ms 00:18:16.000 [2024-07-23 18:36:15.490740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.490959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.490974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.000 [2024-07-23 18:36:15.490988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:16.000 [2024-07-23 18:36:15.490999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.495274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.495320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:16.000 [2024-07-23 18:36:15.495338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:18:16.000 [2024-07-23 18:36:15.495348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.498498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.498534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:16.000 [2024-07-23 
18:36:15.498550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms 00:18:16.000 [2024-07-23 18:36:15.498560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.498877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.498895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.000 [2024-07-23 18:36:15.498949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:16.000 [2024-07-23 18:36:15.498962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.548147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.548200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:16.000 [2024-07-23 18:36:15.548217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.203 ms 00:18:16.000 [2024-07-23 18:36:15.548229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.553277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.553327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:16.000 [2024-07-23 18:36:15.553343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.997 ms 00:18:16.000 [2024-07-23 18:36:15.553354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.557255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.557297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:16.000 [2024-07-23 18:36:15.557313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:18:16.000 [2024-07-23 18:36:15.557322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.561275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.561313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.000 [2024-07-23 18:36:15.561327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:18:16.000 [2024-07-23 18:36:15.561337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.561397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.561410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.000 [2024-07-23 18:36:15.561423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:16.000 [2024-07-23 18:36:15.561432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.561521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.000 [2024-07-23 18:36:15.561532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.000 [2024-07-23 18:36:15.561545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:16.000 [2024-07-23 18:36:15.561554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.000 [2024-07-23 18:36:15.562757] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.000 [2024-07-23 18:36:15.564004] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3776.788 ms, result 0 00:18:16.000 [2024-07-23 18:36:15.565071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:16.000 { 00:18:16.000 "name": "ftl0", 00:18:16.000 "uuid": "af89930e-19ee-46e5-a594-fa546b86b4cb" 00:18:16.000 } 00:18:16.000 18:36:15 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:16.000 [ 00:18:16.000 { 00:18:16.000 "name": "ftl0", 00:18:16.000 "aliases": [ 00:18:16.000 "af89930e-19ee-46e5-a594-fa546b86b4cb" 00:18:16.000 ], 00:18:16.000 "product_name": "FTL disk", 00:18:16.000 "block_size": 4096, 00:18:16.000 "num_blocks": 23592960, 00:18:16.000 "uuid": "af89930e-19ee-46e5-a594-fa546b86b4cb", 00:18:16.000 "assigned_rate_limits": { 00:18:16.000 "rw_ios_per_sec": 0, 00:18:16.000 "rw_mbytes_per_sec": 0, 00:18:16.000 "r_mbytes_per_sec": 0, 00:18:16.000 "w_mbytes_per_sec": 0 00:18:16.000 }, 00:18:16.000 "claimed": false, 00:18:16.000 "zoned": false, 00:18:16.000 "supported_io_types": { 00:18:16.000 "read": true, 00:18:16.000 "write": true, 00:18:16.000 "unmap": true, 00:18:16.000 "write_zeroes": true, 00:18:16.000 "flush": true, 00:18:16.000 "reset": false, 00:18:16.000 "compare": false, 00:18:16.000 "compare_and_write": false, 00:18:16.000 "abort": false, 00:18:16.000 "nvme_admin": false, 00:18:16.000 "nvme_io": false 00:18:16.000 }, 00:18:16.000 "driver_specific": { 00:18:16.000 "ftl": { 00:18:16.000 "base_bdev": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:16.000 "cache": "nvc0n1p0" 00:18:16.000 } 00:18:16.000 } 00:18:16.000 } 00:18:16.000 ] 00:18:16.000 18:36:15 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:18:16.000 18:36:15 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:16.000 18:36:15 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:16.261 { 00:18:16.261 "name": "ftl0", 00:18:16.261 "aliases": [ 00:18:16.261 "af89930e-19ee-46e5-a594-fa546b86b4cb" 00:18:16.261 ], 00:18:16.261 "product_name": "FTL disk", 00:18:16.261 "block_size": 4096, 00:18:16.261 "num_blocks": 23592960, 00:18:16.261 "uuid": "af89930e-19ee-46e5-a594-fa546b86b4cb", 00:18:16.261 "assigned_rate_limits": { 00:18:16.261 "rw_ios_per_sec": 0, 00:18:16.261 "rw_mbytes_per_sec": 0, 00:18:16.261 "r_mbytes_per_sec": 0, 00:18:16.261 "w_mbytes_per_sec": 0 00:18:16.261 }, 00:18:16.261 "claimed": false, 00:18:16.261 "zoned": false, 00:18:16.261 "supported_io_types": { 
00:18:16.261 "read": true, 00:18:16.261 "write": true, 00:18:16.261 "unmap": true, 00:18:16.261 "write_zeroes": true, 00:18:16.261 "flush": true, 00:18:16.261 "reset": false, 00:18:16.261 "compare": false, 00:18:16.261 "compare_and_write": false, 00:18:16.261 "abort": false, 00:18:16.261 "nvme_admin": false, 00:18:16.261 "nvme_io": false 00:18:16.261 }, 00:18:16.261 "driver_specific": { 00:18:16.261 "ftl": { 00:18:16.261 "base_bdev": "2cdb2f17-8677-45e5-9d72-57a64b2dcf33", 00:18:16.261 "cache": "nvc0n1p0" 00:18:16.261 } 00:18:16.261 } 00:18:16.261 } 00:18:16.261 ]' 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:16.261 18:36:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:16.522 [2024-07-23 18:36:16.453186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.453244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:16.522 [2024-07-23 18:36:16.453260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.522 [2024-07-23 18:36:16.453271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.453313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:16.522 [2024-07-23 18:36:16.454039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.454059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:16.522 [2024-07-23 18:36:16.454071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:18:16.522 [2024-07-23 18:36:16.454081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.454624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.454654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:16.522 [2024-07-23 18:36:16.454668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:18:16.522 [2024-07-23 18:36:16.454677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.457429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.457466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:16.522 [2024-07-23 18:36:16.457480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:18:16.522 [2024-07-23 18:36:16.457490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.463064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.463108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:16.522 [2024-07-23 18:36:16.463150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.523 ms 00:18:16.522 [2024-07-23 18:36:16.463161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.465056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.465099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:16.522 [2024-07-23 18:36:16.465113] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:18:16.522 [2024-07-23 18:36:16.465122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.470919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.470958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:16.522 [2024-07-23 18:36:16.470974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:18:16.522 [2024-07-23 18:36:16.470984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.522 [2024-07-23 18:36:16.471151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.522 [2024-07-23 18:36:16.471165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:16.522 [2024-07-23 18:36:16.471178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:16.523 [2024-07-23 18:36:16.471187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.523 [2024-07-23 18:36:16.473453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.523 [2024-07-23 18:36:16.473490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:16.523 [2024-07-23 18:36:16.473505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:18:16.523 [2024-07-23 18:36:16.473513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.523 [2024-07-23 18:36:16.475163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.523 [2024-07-23 18:36:16.475208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:16.523 [2024-07-23 18:36:16.475222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:18:16.523 [2024-07-23 18:36:16.475230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.523 [2024-07-23 18:36:16.476558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.523 [2024-07-23 18:36:16.476615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:16.523 [2024-07-23 18:36:16.476631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:18:16.523 [2024-07-23 18:36:16.476640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.523 [2024-07-23 18:36:16.477791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.523 [2024-07-23 18:36:16.477828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:16.523 [2024-07-23 18:36:16.477842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:18:16.523 [2024-07-23 18:36:16.477852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.523 [2024-07-23 18:36:16.477898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:16.523 [2024-07-23 18:36:16.477916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:16.523 [2024-07-23 18:36:16.477965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.477999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:16.523 [2024-07-23 18:36:16.478451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.478980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479833] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.479995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:16.524 [2024-07-23 18:36:16.480124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:16.524 [2024-07-23 18:36:16.480136] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:16.524 [2024-07-23 18:36:16.480146] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:16.524 [2024-07-23 18:36:16.480157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:16.524 [2024-07-23 18:36:16.480165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:16.524 [2024-07-23 18:36:16.480181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:16.524 [2024-07-23 18:36:16.480204] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:16.524 [2024-07-23 18:36:16.480229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:16.524 [2024-07-23 18:36:16.480238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:16.524 [2024-07-23 18:36:16.480248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:16.524 [2024-07-23 18:36:16.480255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:16.524 [2024-07-23 18:36:16.480267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.524 [2024-07-23 18:36:16.480277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:16.524 [2024-07-23 18:36:16.480289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:18:16.524 [2024-07-23 18:36:16.480298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.524 [2024-07-23 18:36:16.482179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.524 [2024-07-23 18:36:16.482208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:16.524 [2024-07-23 18:36:16.482221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:18:16.525 [2024-07-23 18:36:16.482231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.482350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.525 [2024-07-23 18:36:16.482361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:16.525 [2024-07-23 18:36:16.482372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:16.525 [2024-07-23 18:36:16.482381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.489011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.489102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.525 [2024-07-23 18:36:16.489139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.489165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.489315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.489369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.525 [2024-07-23 18:36:16.489446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.489517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.489674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.489731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.525 [2024-07-23 18:36:16.489787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.489834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.489925] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.489943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.525 [2024-07-23 18:36:16.489956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.489964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.504204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.504353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.525 [2024-07-23 18:36:16.504392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.504420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.513205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.513339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.525 [2024-07-23 18:36:16.513378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.513418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.513539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.513622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.525 [2024-07-23 18:36:16.513705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.513755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.513859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.513905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.525 [2024-07-23 18:36:16.513956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.514002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.514168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.514224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.525 [2024-07-23 18:36:16.514276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.514320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.514447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.514500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:16.525 [2024-07-23 18:36:16.514524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.514536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.514636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.514650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.525 [2024-07-23 18:36:16.514662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.514672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:16.525 [2024-07-23 18:36:16.514741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.525 [2024-07-23 18:36:16.514753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.525 [2024-07-23 18:36:16.514785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.525 [2024-07-23 18:36:16.514796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.525 [2024-07-23 18:36:16.515008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.907 ms, result 0 00:18:16.525 true 00:18:16.525 18:36:16 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89563 00:18:16.525 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89563 ']' 00:18:16.525 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89563 00:18:16.525 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:16.525 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.525 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89563 00:18:16.784 killing process with pid 89563 00:18:16.784 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:16.784 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:16.784 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89563' 00:18:16.784 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89563 00:18:16.784 18:36:16 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89563 00:18:21.036 18:36:20 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:21.974 65536+0 records in 00:18:21.974 65536+0 records out 00:18:21.974 268435456 bytes (268 MB, 256 MiB) copied, 0.845584 s, 317 MB/s 00:18:21.974 18:36:21 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.974 [2024-07-23 18:36:21.905124] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:21.974 [2024-07-23 18:36:21.905248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89767 ] 00:18:22.234 [2024-07-23 18:36:22.053112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.234 [2024-07-23 18:36:22.098429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.234 [2024-07-23 18:36:22.200670] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.234 [2024-07-23 18:36:22.200761] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.495 [2024-07-23 18:36:22.349495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.349554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.495 [2024-07-23 18:36:22.349583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.495 [2024-07-23 18:36:22.349602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.351616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.351658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.495 [2024-07-23 18:36:22.351670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:18:22.495 [2024-07-23 18:36:22.351688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.351766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.495 [2024-07-23 18:36:22.351972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.495 [2024-07-23 18:36:22.351998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.352007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.495 [2024-07-23 18:36:22.352021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:22.495 [2024-07-23 18:36:22.352030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.353546] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.495 [2024-07-23 18:36:22.356083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.356124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.495 [2024-07-23 18:36:22.356137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:18:22.495 [2024-07-23 18:36:22.356147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.356228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.356242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.495 [2024-07-23 18:36:22.356251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:22.495 [2024-07-23 18:36:22.356273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.363095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 
18:36:22.363130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.495 [2024-07-23 18:36:22.363141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.781 ms 00:18:22.495 [2024-07-23 18:36:22.363151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.363281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.363298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.495 [2024-07-23 18:36:22.363318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:22.495 [2024-07-23 18:36:22.363331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.363369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.363382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.495 [2024-07-23 18:36:22.363392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.495 [2024-07-23 18:36:22.363411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.363438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.495 [2024-07-23 18:36:22.365154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.365186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.495 [2024-07-23 18:36:22.365201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:18:22.495 [2024-07-23 18:36:22.365210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.365259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.365283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.495 [2024-07-23 18:36:22.365293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.495 [2024-07-23 18:36:22.365302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.365324] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.495 [2024-07-23 18:36:22.365347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.495 [2024-07-23 18:36:22.365394] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.495 [2024-07-23 18:36:22.365420] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:22.495 [2024-07-23 18:36:22.365503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.495 [2024-07-23 18:36:22.365524] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.495 [2024-07-23 18:36:22.365536] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:22.495 [2024-07-23 18:36:22.365547] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.495 [2024-07-23 18:36:22.365558] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.495 [2024-07-23 18:36:22.365585] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.495 [2024-07-23 18:36:22.365595] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.495 [2024-07-23 18:36:22.365604] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.495 [2024-07-23 18:36:22.365616] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.495 [2024-07-23 18:36:22.365627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.365636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.495 [2024-07-23 18:36:22.365645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:22.495 [2024-07-23 18:36:22.365654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.365743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.495 [2024-07-23 18:36:22.365754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.495 [2024-07-23 18:36:22.365764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:22.495 [2024-07-23 18:36:22.365782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.495 [2024-07-23 18:36:22.365868] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.495 [2024-07-23 18:36:22.365881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.495 [2024-07-23 18:36:22.365890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.496 [2024-07-23 18:36:22.365898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.365908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.496 [2024-07-23 18:36:22.365916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.365925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.496 [2024-07-23 18:36:22.365935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.496 [2024-07-23 18:36:22.365944] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.496 [2024-07-23 18:36:22.365952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.496 [2024-07-23 18:36:22.365959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.496 [2024-07-23 18:36:22.365967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.496 [2024-07-23 18:36:22.365979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.496 [2024-07-23 18:36:22.365989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.496 [2024-07-23 18:36:22.365999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:22.496 [2024-07-23 18:36:22.366007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.496 [2024-07-23 18:36:22.366023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366030] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.496 [2024-07-23 18:36:22.366046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.496 [2024-07-23 18:36:22.366068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.496 [2024-07-23 18:36:22.366092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.496 [2024-07-23 18:36:22.366121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.496 [2024-07-23 18:36:22.366143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.496 [2024-07-23 18:36:22.366158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.496 [2024-07-23 18:36:22.366165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:22.496 [2024-07-23 18:36:22.366173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.496 [2024-07-23 18:36:22.366180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.496 [2024-07-23 18:36:22.366188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:22.496 [2024-07-23 18:36:22.366196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.496 [2024-07-23 18:36:22.366212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:22.496 [2024-07-23 18:36:22.366220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366227] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.496 [2024-07-23 18:36:22.366238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.496 [2024-07-23 18:36:22.366248] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.496 [2024-07-23 18:36:22.366264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.496 [2024-07-23 18:36:22.366273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.496 [2024-07-23 18:36:22.366281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.496 [2024-07-23 18:36:22.366289] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.496 [2024-07-23 18:36:22.366296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.496 [2024-07-23 18:36:22.366304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.496 [2024-07-23 18:36:22.366313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.496 [2024-07-23 18:36:22.366323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.496 [2024-07-23 18:36:22.366340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:22.496 [2024-07-23 18:36:22.366347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.496 [2024-07-23 18:36:22.366355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:22.496 [2024-07-23 18:36:22.366364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:22.496 [2024-07-23 18:36:22.366374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:22.496 [2024-07-23 18:36:22.366383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:22.496 [2024-07-23 18:36:22.366391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:22.496 [2024-07-23 18:36:22.366399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:22.496 [2024-07-23 18:36:22.366424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:22.496 [2024-07-23 18:36:22.366465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.496 [2024-07-23 18:36:22.366477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:22.496 [2024-07-23 18:36:22.366508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.496 [2024-07-23 18:36:22.366519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.496 [2024-07-23 18:36:22.366527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.496 [2024-07-23 18:36:22.366537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.366549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.496 [2024-07-23 18:36:22.366593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:18:22.496 [2024-07-23 18:36:22.366602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.388054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.388163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.496 [2024-07-23 18:36:22.388217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.426 ms 00:18:22.496 [2024-07-23 18:36:22.388290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.388506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.388563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.496 [2024-07-23 18:36:22.388628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:22.496 [2024-07-23 18:36:22.388673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.398661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.398763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.496 [2024-07-23 18:36:22.398822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.917 ms 00:18:22.496 [2024-07-23 18:36:22.398871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.399010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.399069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.496 [2024-07-23 18:36:22.399125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.496 [2024-07-23 18:36:22.399181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.399744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.399829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.496 [2024-07-23 18:36:22.399887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:18:22.496 [2024-07-23 18:36:22.399946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.496 [2024-07-23 18:36:22.400146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.496 [2024-07-23 18:36:22.400222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.496 [2024-07-23 18:36:22.400278] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:22.497 [2024-07-23 18:36:22.400333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.406671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.406771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.497 [2024-07-23 18:36:22.406823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.272 ms 00:18:22.497 [2024-07-23 18:36:22.406869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.409592] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:22.497 [2024-07-23 18:36:22.409691] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.497 [2024-07-23 18:36:22.409761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.409834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.497 [2024-07-23 18:36:22.409892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:18:22.497 [2024-07-23 18:36:22.409932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.423032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.423143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.497 [2024-07-23 18:36:22.423207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:18:22.497 [2024-07-23 18:36:22.423250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.425309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.425392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.497 [2024-07-23 18:36:22.425459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:18:22.497 [2024-07-23 18:36:22.425493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.427119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.427204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.497 [2024-07-23 18:36:22.427257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:18:22.497 [2024-07-23 18:36:22.427302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.427683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.427784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.497 [2024-07-23 18:36:22.427838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:18:22.497 [2024-07-23 18:36:22.427897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.450475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.450672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.497 [2024-07-23 18:36:22.450729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.549 ms 
00:18:22.497 [2024-07-23 18:36:22.450787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.457098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.497 [2024-07-23 18:36:22.474119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.474267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.497 [2024-07-23 18:36:22.474403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.230 ms 00:18:22.497 [2024-07-23 18:36:22.474471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.474678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.474742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.497 [2024-07-23 18:36:22.474825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.497 [2024-07-23 18:36:22.474887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.475008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.475088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.497 [2024-07-23 18:36:22.475156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:22.497 [2024-07-23 18:36:22.475189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.475243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.475255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.497 [2024-07-23 18:36:22.475265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.497 [2024-07-23 18:36:22.475294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.475350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.497 [2024-07-23 18:36:22.475363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.475373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.497 [2024-07-23 18:36:22.475383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:22.497 [2024-07-23 18:36:22.475392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.479374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.479432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.497 [2024-07-23 18:36:22.479444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.967 ms 00:18:22.497 [2024-07-23 18:36:22.479466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 18:36:22.479550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.497 [2024-07-23 18:36:22.479562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.497 [2024-07-23 18:36:22.479585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:22.497 [2024-07-23 18:36:22.479594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.497 [2024-07-23 
18:36:22.480514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.497 [2024-07-23 18:36:22.481537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.996 ms, result 0 00:18:22.497 [2024-07-23 18:36:22.482348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.497 [2024-07-23 18:36:22.491527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:32.022  Copying: 28/256 [MB] (28 MBps) Copying: 55/256 [MB] (27 MBps) Copying: 83/256 [MB] (28 MBps) Copying: 111/256 [MB] (27 MBps) Copying: 138/256 [MB] (26 MBps) Copying: 165/256 [MB] (27 MBps) Copying: 192/256 [MB] (27 MBps) Copying: 219/256 [MB] (26 MBps) Copying: 247/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-23 18:36:31.797291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.022 [2024-07-23 18:36:31.798907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.798961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.022 [2024-07-23 18:36:31.798977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.022 [2024-07-23 18:36:31.798986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.799009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:32.022 [2024-07-23 18:36:31.799687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.799706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.022 [2024-07-23 18:36:31.799717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:18:32.022 [2024-07-23 18:36:31.799725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.801611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.801657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.022 [2024-07-23 18:36:31.801669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:18:32.022 [2024-07-23 18:36:31.801686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.807908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.807949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.022 [2024-07-23 18:36:31.807976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:18:32.022 [2024-07-23 18:36:31.807984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.813535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.813582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.022 [2024-07-23 18:36:31.813595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.508 ms 00:18:32.022 [2024-07-23 18:36:31.813611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.814949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.814986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.022 [2024-07-23 18:36:31.814997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.301 ms 00:18:32.022 [2024-07-23 18:36:31.815005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.819703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.819740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.022 [2024-07-23 18:36:31.819752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:18:32.022 [2024-07-23 18:36:31.819773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.819890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.819902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.022 [2024-07-23 18:36:31.819912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:32.022 [2024-07-23 18:36:31.819924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.822177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.822213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:32.022 [2024-07-23 18:36:31.822223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:18:32.022 [2024-07-23 18:36:31.822232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.823805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.823842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:32.022 [2024-07-23 18:36:31.823852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:18:32.022 [2024-07-23 18:36:31.823860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.825051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.825091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:32.022 [2024-07-23 18:36:31.825102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:18:32.022 [2024-07-23 18:36:31.825110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.826211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.022 [2024-07-23 18:36:31.826252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:32.022 [2024-07-23 18:36:31.826264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:18:32.022 [2024-07-23 18:36:31.826272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.022 [2024-07-23 18:36:31.826301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:32.022 [2024-07-23 18:36:31.826317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826338] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826583] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:32.022 [2024-07-23 18:36:31.826701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 
[2024-07-23 18:36:31.826821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.826999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:18:32.023 [2024-07-23 18:36:31.827058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.023 [2024-07-23 18:36:31.827316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.023 [2024-07-23 18:36:31.827324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 
00:18:32.023 [2024-07-23 18:36:31.827333] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.023 [2024-07-23 18:36:31.827341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:32.023 [2024-07-23 18:36:31.827350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.023 [2024-07-23 18:36:31.827363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.023 [2024-07-23 18:36:31.827377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.023 [2024-07-23 18:36:31.827391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.023 [2024-07-23 18:36:31.827407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.023 [2024-07-23 18:36:31.827414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.023 [2024-07-23 18:36:31.827424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.023 [2024-07-23 18:36:31.827439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.023 [2024-07-23 18:36:31.827454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.023 [2024-07-23 18:36:31.827478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:18:32.023 [2024-07-23 18:36:31.827486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.829436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.023 [2024-07-23 18:36:31.829473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.023 [2024-07-23 18:36:31.829492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:18:32.023 [2024-07-23 18:36:31.829500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.829638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.023 [2024-07-23 18:36:31.829659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.023 [2024-07-23 18:36:31.829669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:32.023 [2024-07-23 18:36:31.829677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.835381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.023 [2024-07-23 18:36:31.835409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.023 [2024-07-23 18:36:31.835432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.023 [2024-07-23 18:36:31.835444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.835495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.023 [2024-07-23 18:36:31.835506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.023 [2024-07-23 18:36:31.835515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.023 [2024-07-23 18:36:31.835523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.835568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.023 [2024-07-23 18:36:31.835580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.023 [2024-07-23 18:36:31.835607] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.023 [2024-07-23 18:36:31.835616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.023 [2024-07-23 18:36:31.835640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.023 [2024-07-23 18:36:31.835651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.024 [2024-07-23 18:36:31.835661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.835669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.848643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.848698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.024 [2024-07-23 18:36:31.848712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.848721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.856985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.024 [2024-07-23 18:36:31.857051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.024 [2024-07-23 18:36:31.857132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.024 [2024-07-23 18:36:31.857194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.024 [2024-07-23 18:36:31.857303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.024 [2024-07-23 18:36:31.857380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:18:32.024 [2024-07-23 18:36:31.857458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.024 [2024-07-23 18:36:31.857525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.024 [2024-07-23 18:36:31.857535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.024 [2024-07-23 18:36:31.857543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.024 [2024-07-23 18:36:31.857716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.896 ms, result 0 00:18:32.593 00:18:32.593 00:18:32.593 18:36:32 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:32.593 18:36:32 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89883 00:18:32.593 18:36:32 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89883 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89883 ']' 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.593 18:36:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:32.593 [2024-07-23 18:36:32.519798] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:32.593 [2024-07-23 18:36:32.519920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89883 ] 00:18:32.853 [2024-07-23 18:36:32.669059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.853 [2024-07-23 18:36:32.714443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.423 18:36:33 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.423 18:36:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:33.423 18:36:33 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:33.683 [2024-07-23 18:36:33.482757] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.683 [2024-07-23 18:36:33.482874] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.683 [2024-07-23 18:36:33.648483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.683 [2024-07-23 18:36:33.648655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:33.683 [2024-07-23 18:36:33.648705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:33.683 [2024-07-23 18:36:33.648738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.683 [2024-07-23 18:36:33.651004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.683 [2024-07-23 18:36:33.651100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.683 [2024-07-23 18:36:33.651152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:18:33.684 [2024-07-23 18:36:33.651177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.651318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:33.684 [2024-07-23 18:36:33.651693] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:33.684 [2024-07-23 18:36:33.651791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.651835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.684 [2024-07-23 18:36:33.651885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:18:33.684 [2024-07-23 18:36:33.651928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.653539] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:33.684 [2024-07-23 18:36:33.656211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.656306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:33.684 [2024-07-23 18:36:33.656377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:18:33.684 [2024-07-23 18:36:33.656429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.656587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.656653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:33.684 [2024-07-23 18:36:33.656723] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:33.684 [2024-07-23 18:36:33.656790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.663846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.663931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.684 [2024-07-23 18:36:33.663982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.964 ms 00:18:33.684 [2024-07-23 18:36:33.664063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.664262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.664330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.684 [2024-07-23 18:36:33.664395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:33.684 [2024-07-23 18:36:33.664441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.664485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.664498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.684 [2024-07-23 18:36:33.664509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:33.684 [2024-07-23 18:36:33.664520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.664551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:33.684 [2024-07-23 18:36:33.666324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.666361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.684 [2024-07-23 18:36:33.666377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:18:33.684 [2024-07-23 18:36:33.666390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.666446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.666465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.684 [2024-07-23 18:36:33.666478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:33.684 [2024-07-23 18:36:33.666487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.666523] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:33.684 [2024-07-23 18:36:33.666556] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:33.684 [2024-07-23 18:36:33.666615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:33.684 [2024-07-23 18:36:33.666636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:33.684 [2024-07-23 18:36:33.666726] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:33.684 [2024-07-23 18:36:33.666739] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.684 [2024-07-23 18:36:33.666757] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:33.684 [2024-07-23 18:36:33.666769] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.684 [2024-07-23 18:36:33.666783] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.684 [2024-07-23 18:36:33.666816] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:33.684 [2024-07-23 18:36:33.666830] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.684 [2024-07-23 18:36:33.666839] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:33.684 [2024-07-23 18:36:33.666860] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:33.684 [2024-07-23 18:36:33.666873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.666884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.684 [2024-07-23 18:36:33.666894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:33.684 [2024-07-23 18:36:33.666905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.666995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.684 [2024-07-23 18:36:33.667017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.684 [2024-07-23 18:36:33.667035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:33.684 [2024-07-23 18:36:33.667054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.684 [2024-07-23 18:36:33.667176] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.684 [2024-07-23 18:36:33.667200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.684 [2024-07-23 18:36:33.667226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.684 [2024-07-23 18:36:33.667268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.684 [2024-07-23 18:36:33.667307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.684 [2024-07-23 18:36:33.667336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.684 [2024-07-23 18:36:33.667353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:33.684 [2024-07-23 18:36:33.667366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.684 [2024-07-23 18:36:33.667382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.684 [2024-07-23 18:36:33.667394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:33.684 [2024-07-23 18:36:33.667407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.684 
[2024-07-23 18:36:33.667421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.684 [2024-07-23 18:36:33.667435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.684 [2024-07-23 18:36:33.667474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.684 [2024-07-23 18:36:33.667520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.684 [2024-07-23 18:36:33.667558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.684 [2024-07-23 18:36:33.667616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667625] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.684 [2024-07-23 18:36:33.667640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.684 [2024-07-23 18:36:33.667655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.684 [2024-07-23 18:36:33.667682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:33.684 [2024-07-23 18:36:33.667696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:33.684 [2024-07-23 18:36:33.667708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.684 [2024-07-23 18:36:33.667728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:33.684 [2024-07-23 18:36:33.667738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:33.684 [2024-07-23 18:36:33.667753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:33.684 [2024-07-23 18:36:33.667779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:33.684 [2024-07-23 18:36:33.667793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.684 [2024-07-23 18:36:33.667810] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.684 [2024-07-23 18:36:33.667822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.685 [2024-07-23 18:36:33.667854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.685 [2024-07-23 18:36:33.667880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.685 [2024-07-23 18:36:33.667897] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:33.685 [2024-07-23 18:36:33.667910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.685 [2024-07-23 18:36:33.667927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.685 [2024-07-23 18:36:33.667937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.685 [2024-07-23 18:36:33.667953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.685 [2024-07-23 18:36:33.667966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.685 [2024-07-23 18:36:33.667983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.685 [2024-07-23 18:36:33.668002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:33.685 [2024-07-23 18:36:33.668027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:33.685 [2024-07-23 18:36:33.668044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:33.685 [2024-07-23 18:36:33.668058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:33.685 [2024-07-23 18:36:33.668075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:33.685 [2024-07-23 18:36:33.668087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:33.685 [2024-07-23 18:36:33.668102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:33.685 [2024-07-23 18:36:33.668117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:33.685 [2024-07-23 18:36:33.668129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:33.685 [2024-07-23 18:36:33.668143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:33.685 [2024-07-23 18:36:33.668217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.685 [2024-07-23 
18:36:33.668232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.685 [2024-07-23 18:36:33.668259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.685 [2024-07-23 18:36:33.668277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.685 [2024-07-23 18:36:33.668289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:33.685 [2024-07-23 18:36:33.668304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.668320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.685 [2024-07-23 18:36:33.668338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:18:33.685 [2024-07-23 18:36:33.668366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.681010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.681079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.685 [2024-07-23 18:36:33.681097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.552 ms 00:18:33.685 [2024-07-23 18:36:33.681113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.681269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.681289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.685 [2024-07-23 18:36:33.681303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:33.685 [2024-07-23 18:36:33.681321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.691972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.692024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.685 [2024-07-23 18:36:33.692043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.641 ms 00:18:33.685 [2024-07-23 18:36:33.692053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.692161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.692173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.685 [2024-07-23 18:36:33.692189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:33.685 [2024-07-23 18:36:33.692199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.692686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.692706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.685 [2024-07-23 18:36:33.692722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:18:33.685 [2024-07-23 18:36:33.692733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.692857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.692870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.685 [2024-07-23 18:36:33.692896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:33.685 [2024-07-23 18:36:33.692908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.700240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.700293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.685 [2024-07-23 18:36:33.700309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.318 ms 00:18:33.685 [2024-07-23 18:36:33.700325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.703037] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:33.685 [2024-07-23 18:36:33.703079] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:33.685 [2024-07-23 18:36:33.703096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.703106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:33.685 [2024-07-23 18:36:33.703118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.645 ms 00:18:33.685 [2024-07-23 18:36:33.703127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.715673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.715713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:33.685 [2024-07-23 18:36:33.715729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.513 ms 00:18:33.685 [2024-07-23 18:36:33.715739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.717711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.717749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:33.685 [2024-07-23 18:36:33.717763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:18:33.685 [2024-07-23 18:36:33.717772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.719402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.719441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:33.685 [2024-07-23 18:36:33.719456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:18:33.685 [2024-07-23 18:36:33.719465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.685 [2024-07-23 18:36:33.719811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.685 [2024-07-23 18:36:33.719831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.685 [2024-07-23 18:36:33.719844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:18:33.685 [2024-07-23 18:36:33.719853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.749985] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.750070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:33.945 [2024-07-23 18:36:33.750089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.137 ms 00:18:33.945 [2024-07-23 18:36:33.750098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.756508] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:33.945 [2024-07-23 18:36:33.772911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.772980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.945 [2024-07-23 18:36:33.772995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.746 ms 00:18:33.945 [2024-07-23 18:36:33.773006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.773125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.773139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:33.945 [2024-07-23 18:36:33.773152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:33.945 [2024-07-23 18:36:33.773163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.773223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.773236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.945 [2024-07-23 18:36:33.773246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:33.945 [2024-07-23 18:36:33.773278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.773308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.773320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.945 [2024-07-23 18:36:33.773330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:33.945 [2024-07-23 18:36:33.773369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.773404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:33.945 [2024-07-23 18:36:33.773418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.773427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:33.945 [2024-07-23 18:36:33.773437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:33.945 [2024-07-23 18:36:33.773446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.777501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.777548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.945 [2024-07-23 18:36:33.777563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:18:33.945 [2024-07-23 18:36:33.777585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.777718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.945 [2024-07-23 18:36:33.777730] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.945 [2024-07-23 18:36:33.777742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:33.945 [2024-07-23 18:36:33.777750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.945 [2024-07-23 18:36:33.778700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.945 [2024-07-23 18:36:33.779655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.170 ms, result 0 00:18:33.945 [2024-07-23 18:36:33.780499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.945 Some configs were skipped because the RPC state that can call them passed over. 00:18:33.945 18:36:33 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:34.207 [2024-07-23 18:36:34.015548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.207 [2024-07-23 18:36:34.015727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:34.207 [2024-07-23 18:36:34.015770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:18:34.207 [2024-07-23 18:36:34.015802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.207 [2024-07-23 18:36:34.015881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.068 ms, result 0 00:18:34.207 true 00:18:34.207 18:36:34 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:34.207 [2024-07-23 18:36:34.198946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.207 [2024-07-23 18:36:34.199080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:34.207 [2024-07-23 18:36:34.199125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:18:34.207 [2024-07-23 18:36:34.199151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.207 [2024-07-23 18:36:34.199227] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.634 ms, result 0 00:18:34.207 true 00:18:34.207 18:36:34 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89883 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89883 ']' 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89883 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89883 00:18:34.207 killing process with pid 89883 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89883' 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89883 00:18:34.207 18:36:34 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89883 00:18:34.475 [2024-07-23 18:36:34.390083] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.390155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.475 [2024-07-23 18:36:34.390170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.475 [2024-07-23 18:36:34.390181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.390207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:34.475 [2024-07-23 18:36:34.390884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.390906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.475 [2024-07-23 18:36:34.390918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:18:34.475 [2024-07-23 18:36:34.390934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.391176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.391188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.475 [2024-07-23 18:36:34.391216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:34.475 [2024-07-23 18:36:34.391226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.394585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.394627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.475 [2024-07-23 18:36:34.394642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:18:34.475 [2024-07-23 18:36:34.394660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.400338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.400379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:34.475 [2024-07-23 18:36:34.400393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.637 ms 00:18:34.475 [2024-07-23 18:36:34.400402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.401801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.401842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.475 [2024-07-23 18:36:34.401855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:18:34.475 [2024-07-23 18:36:34.401864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.406390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.406430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.475 [2024-07-23 18:36:34.406451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.492 ms 00:18:34.475 [2024-07-23 18:36:34.406466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.406609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.406622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.475 [2024-07-23 18:36:34.406640] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:34.475 [2024-07-23 18:36:34.406649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.408915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.408950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:34.475 [2024-07-23 18:36:34.408965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:18:34.475 [2024-07-23 18:36:34.408974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.410651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.475 [2024-07-23 18:36:34.410688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:34.475 [2024-07-23 18:36:34.410705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:18:34.475 [2024-07-23 18:36:34.410715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.475 [2024-07-23 18:36:34.412045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.476 [2024-07-23 18:36:34.412085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.476 [2024-07-23 18:36:34.412103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:18:34.476 [2024-07-23 18:36:34.412112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.476 [2024-07-23 18:36:34.413296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.476 [2024-07-23 18:36:34.413338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.476 [2024-07-23 18:36:34.413355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:18:34.476 [2024-07-23 18:36:34.413364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.476 [2024-07-23 18:36:34.413400] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.476 [2024-07-23 18:36:34.413433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413585] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 
18:36:34.413894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.476 [2024-07-23 18:36:34.413904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.413989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:34.477 [2024-07-23 18:36:34.414161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.477 [2024-07-23 18:36:34.414566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.478 [2024-07-23 18:36:34.414598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:34.478 [2024-07-23 18:36:34.414615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.478 [2024-07-23 18:36:34.414626] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:34.478 [2024-07-23 18:36:34.414637] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.478 [2024-07-23 18:36:34.414648] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.478 [2024-07-23 18:36:34.414660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.478 [2024-07-23 18:36:34.414671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.478 [2024-07-23 18:36:34.414681] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.478 [2024-07-23 18:36:34.414692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.478 [2024-07-23 18:36:34.414701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.478 [2024-07-23 18:36:34.414713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.478 [2024-07-23 18:36:34.414722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.478 [2024-07-23 18:36:34.414747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.478 
[2024-07-23 18:36:34.414756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.478 [2024-07-23 18:36:34.414768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:18:34.478 [2024-07-23 18:36:34.414776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.416582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.478 [2024-07-23 18:36:34.416619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.478 [2024-07-23 18:36:34.416632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:18:34.478 [2024-07-23 18:36:34.416641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.416759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.478 [2024-07-23 18:36:34.416770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.478 [2024-07-23 18:36:34.416781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:34.478 [2024-07-23 18:36:34.416790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.423296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.423322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.478 [2024-07-23 18:36:34.423336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.423348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.423430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.423442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.478 [2024-07-23 18:36:34.423464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.423474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.423534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.423547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.478 [2024-07-23 18:36:34.423558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.423602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.423630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.423640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.478 [2024-07-23 18:36:34.423651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.423660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.437528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.437588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.478 [2024-07-23 18:36:34.437603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.437614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.446184] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.446232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.478 [2024-07-23 18:36:34.446247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.446257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.446316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.446330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.478 [2024-07-23 18:36:34.446353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.446363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.446400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.446411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.478 [2024-07-23 18:36:34.446423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.478 [2024-07-23 18:36:34.446446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.478 [2024-07-23 18:36:34.446525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.478 [2024-07-23 18:36:34.446539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.479 [2024-07-23 18:36:34.446554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.479 [2024-07-23 18:36:34.446563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.479 [2024-07-23 18:36:34.446621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.479 [2024-07-23 18:36:34.446633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.479 [2024-07-23 18:36:34.446652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.479 [2024-07-23 18:36:34.446662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.479 [2024-07-23 18:36:34.446711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.479 [2024-07-23 18:36:34.446722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.479 [2024-07-23 18:36:34.446733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.479 [2024-07-23 18:36:34.446744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.479 [2024-07-23 18:36:34.446806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.479 [2024-07-23 18:36:34.446819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.479 [2024-07-23 18:36:34.446831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.479 [2024-07-23 18:36:34.446841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.479 [2024-07-23 18:36:34.446986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.985 ms, result 0 00:18:34.739 18:36:34 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:34.739 18:36:34 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:34.739 [2024-07-23 18:36:34.777326] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:34.739 [2024-07-23 18:36:34.777465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89919 ] 00:18:34.999 [2024-07-23 18:36:34.925044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.999 [2024-07-23 18:36:34.970889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.259 [2024-07-23 18:36:35.073246] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.260 [2024-07-23 18:36:35.073328] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.260 [2024-07-23 18:36:35.220539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.220611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.260 [2024-07-23 18:36:35.220635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.260 [2024-07-23 18:36:35.220645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.222735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.222793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.260 [2024-07-23 18:36:35.222805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:18:35.260 [2024-07-23 18:36:35.222814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.222889] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.260 [2024-07-23 18:36:35.223095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.260 [2024-07-23 18:36:35.223111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.223120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.260 [2024-07-23 18:36:35.223130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:18:35.260 [2024-07-23 18:36:35.223153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.224729] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.260 [2024-07-23 18:36:35.227318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.227358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.260 [2024-07-23 18:36:35.227370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:18:35.260 [2024-07-23 18:36:35.227380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.227460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.227479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.260 [2024-07-23 18:36:35.227498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 
ms 00:18:35.260 [2024-07-23 18:36:35.227510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.234357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.234385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.260 [2024-07-23 18:36:35.234396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:18:35.260 [2024-07-23 18:36:35.234405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.234515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.234531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.260 [2024-07-23 18:36:35.234549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:35.260 [2024-07-23 18:36:35.234584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.234625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.234638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.260 [2024-07-23 18:36:35.234648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:35.260 [2024-07-23 18:36:35.234657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.234683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:35.260 [2024-07-23 18:36:35.236365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.236396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.260 [2024-07-23 18:36:35.236412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:18:35.260 [2024-07-23 18:36:35.236421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.236473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.236484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.260 [2024-07-23 18:36:35.236494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.260 [2024-07-23 18:36:35.236514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.236546] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.260 [2024-07-23 18:36:35.236586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:35.260 [2024-07-23 18:36:35.236632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.260 [2024-07-23 18:36:35.236658] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:35.260 [2024-07-23 18:36:35.236743] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.260 [2024-07-23 18:36:35.236756] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.260 [2024-07-23 18:36:35.236776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:18:35.260 [2024-07-23 18:36:35.236795] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.260 [2024-07-23 18:36:35.236806] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.260 [2024-07-23 18:36:35.236815] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:35.260 [2024-07-23 18:36:35.236823] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.260 [2024-07-23 18:36:35.236833] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.260 [2024-07-23 18:36:35.236846] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.260 [2024-07-23 18:36:35.236855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.236864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.260 [2024-07-23 18:36:35.236873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:35.260 [2024-07-23 18:36:35.236882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.236970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.260 [2024-07-23 18:36:35.236981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.260 [2024-07-23 18:36:35.236991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:35.260 [2024-07-23 18:36:35.236999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.260 [2024-07-23 18:36:35.237088] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.260 [2024-07-23 18:36:35.237100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.260 [2024-07-23 18:36:35.237119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.260 [2024-07-23 18:36:35.237154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.260 [2024-07-23 18:36:35.237182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.260 [2024-07-23 18:36:35.237199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.260 [2024-07-23 18:36:35.237207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:35.260 [2024-07-23 18:36:35.237218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.260 [2024-07-23 18:36:35.237227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.260 [2024-07-23 18:36:35.237235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:35.260 [2024-07-23 18:36:35.237243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237251] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:35.260 [2024-07-23 18:36:35.237260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.260 [2024-07-23 18:36:35.237284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.260 [2024-07-23 18:36:35.237306] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.260 [2024-07-23 18:36:35.237330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.260 [2024-07-23 18:36:35.237360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.260 [2024-07-23 18:36:35.237375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.260 [2024-07-23 18:36:35.237383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:35.260 [2024-07-23 18:36:35.237390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.260 [2024-07-23 18:36:35.237398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.260 [2024-07-23 18:36:35.237405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:35.260 [2024-07-23 18:36:35.237413] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.260 [2024-07-23 18:36:35.237422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.261 [2024-07-23 18:36:35.237431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:35.261 [2024-07-23 18:36:35.237438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.261 [2024-07-23 18:36:35.237446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.261 [2024-07-23 18:36:35.237454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:35.261 [2024-07-23 18:36:35.237463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.261 [2024-07-23 18:36:35.237470] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.261 [2024-07-23 18:36:35.237482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.261 [2024-07-23 18:36:35.237491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.261 [2024-07-23 18:36:35.237499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.261 [2024-07-23 18:36:35.237509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.261 [2024-07-23 18:36:35.237518] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.261 [2024-07-23 18:36:35.237526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.261 [2024-07-23 18:36:35.237534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.261 [2024-07-23 18:36:35.237542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.261 [2024-07-23 18:36:35.237549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.261 [2024-07-23 18:36:35.237558] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.261 [2024-07-23 18:36:35.237656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.237702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:35.261 [2024-07-23 18:36:35.237738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:35.261 [2024-07-23 18:36:35.237772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:35.261 [2024-07-23 18:36:35.237807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:35.261 [2024-07-23 18:36:35.237914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:35.261 [2024-07-23 18:36:35.238003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:35.261 [2024-07-23 18:36:35.238050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:35.261 [2024-07-23 18:36:35.238094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:35.261 [2024-07-23 18:36:35.238150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:35.261 [2024-07-23 18:36:35.238195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:35.261 [2024-07-23 18:36:35.238421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.261 [2024-07-23 18:36:35.238483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.261 [2024-07-23 18:36:35.238638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.261 [2024-07-23 18:36:35.238697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.261 [2024-07-23 18:36:35.238745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.261 [2024-07-23 18:36:35.238811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.238849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.261 [2024-07-23 18:36:35.238883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:18:35.261 [2024-07-23 18:36:35.238920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.258981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.259104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.261 [2024-07-23 18:36:35.259142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.978 ms 00:18:35.261 [2024-07-23 18:36:35.259173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.259353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.259402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.261 [2024-07-23 18:36:35.259460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:35.261 [2024-07-23 18:36:35.259499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.270039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.270162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.261 [2024-07-23 18:36:35.270184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.501 ms 00:18:35.261 [2024-07-23 18:36:35.270203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.270297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.270328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.261 [2024-07-23 18:36:35.270343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.261 [2024-07-23 18:36:35.270355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.270887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.270916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.261 [2024-07-23 18:36:35.270940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:18:35.261 [2024-07-23 18:36:35.270953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.271102] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.271133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.261 [2024-07-23 18:36:35.271147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:35.261 [2024-07-23 18:36:35.271160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.277724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.277812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.261 [2024-07-23 18:36:35.277848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.544 ms 00:18:35.261 [2024-07-23 18:36:35.277876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.280663] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:35.261 [2024-07-23 18:36:35.280783] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.261 [2024-07-23 18:36:35.280833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.280865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.261 [2024-07-23 18:36:35.280916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:18:35.261 [2024-07-23 18:36:35.280943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.294356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.294442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.261 [2024-07-23 18:36:35.294494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.345 ms 00:18:35.261 [2024-07-23 18:36:35.294528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.296569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.296666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.261 [2024-07-23 18:36:35.296698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:18:35.261 [2024-07-23 18:36:35.296722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.298259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.298336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.261 [2024-07-23 18:36:35.298369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:18:35.261 [2024-07-23 18:36:35.298394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.261 [2024-07-23 18:36:35.298760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.261 [2024-07-23 18:36:35.298836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.261 [2024-07-23 18:36:35.298874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:18:35.261 [2024-07-23 18:36:35.298905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.321911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 
18:36:35.322063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.521 [2024-07-23 18:36:35.322099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.980 ms 00:18:35.521 [2024-07-23 18:36:35.322124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.328417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:35.521 [2024-07-23 18:36:35.345157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.345271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.521 [2024-07-23 18:36:35.345306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.952 ms 00:18:35.521 [2024-07-23 18:36:35.345350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.345501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.345547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.521 [2024-07-23 18:36:35.345613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.521 [2024-07-23 18:36:35.345653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.345743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.345784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.521 [2024-07-23 18:36:35.345819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:35.521 [2024-07-23 18:36:35.345855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.345909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.345947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.521 [2024-07-23 18:36:35.346008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.521 [2024-07-23 18:36:35.346044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.346112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.521 [2024-07-23 18:36:35.346150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.346185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.521 [2024-07-23 18:36:35.346219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:35.521 [2024-07-23 18:36:35.346253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.350243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.350332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.521 [2024-07-23 18:36:35.350388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.947 ms 00:18:35.521 [2024-07-23 18:36:35.350421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.350525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.521 [2024-07-23 18:36:35.350579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.521 [2024-07-23 
18:36:35.350625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:35.521 [2024-07-23 18:36:35.350657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.521 [2024-07-23 18:36:35.351749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.521 [2024-07-23 18:36:35.352829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.113 ms, result 0 00:18:35.521 [2024-07-23 18:36:35.353792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.521 [2024-07-23 18:36:35.361946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:44.411  Copying: 32/256 [MB] (32 MBps) Copying: 62/256 [MB] (30 MBps) Copying: 92/256 [MB] (30 MBps) Copying: 122/256 [MB] (30 MBps) Copying: 152/256 [MB] (29 MBps) Copying: 181/256 [MB] (28 MBps) Copying: 208/256 [MB] (26 MBps) Copying: 234/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-07-23 18:36:44.163887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.411 [2024-07-23 18:36:44.165416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.165491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.411 [2024-07-23 18:36:44.165535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.411 [2024-07-23 18:36:44.165562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.165639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:44.411 [2024-07-23 18:36:44.166339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.166386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.411 [2024-07-23 18:36:44.166417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:18:44.411 [2024-07-23 18:36:44.166444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.166698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.166743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.411 [2024-07-23 18:36:44.166803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:18:44.411 [2024-07-23 18:36:44.166827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.169548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.169638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.411 [2024-07-23 18:36:44.169668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:18:44.411 [2024-07-23 18:36:44.169692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.175007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.175094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:44.411 [2024-07-23 18:36:44.175124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.278 ms 
00:18:44.411 [2024-07-23 18:36:44.175154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.176682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.176771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.411 [2024-07-23 18:36:44.176802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:18:44.411 [2024-07-23 18:36:44.176825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.181640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.181728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.411 [2024-07-23 18:36:44.181759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:18:44.411 [2024-07-23 18:36:44.181782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.181926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.181966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.411 [2024-07-23 18:36:44.182003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:44.411 [2024-07-23 18:36:44.182029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.184657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.184730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:44.411 [2024-07-23 18:36:44.184765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:18:44.411 [2024-07-23 18:36:44.184804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.186518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.186620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:44.411 [2024-07-23 18:36:44.186635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms 00:18:44.411 [2024-07-23 18:36:44.186643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.187955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.187992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.411 [2024-07-23 18:36:44.188003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:18:44.411 [2024-07-23 18:36:44.188011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.189214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.411 [2024-07-23 18:36:44.189259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.411 [2024-07-23 18:36:44.189270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:18:44.411 [2024-07-23 18:36:44.189278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.411 [2024-07-23 18:36:44.189305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.411 [2024-07-23 18:36:44.189321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 
[2024-07-23 18:36:44.189334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:18:44.411 [2024-07-23 18:36:44.189558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.189996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.411 [2024-07-23 18:36:44.190257] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:18:44.411 [2024-07-23 18:36:44.190276] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:44.411 [2024-07-23 18:36:44.190284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.411 [2024-07-23 18:36:44.190293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.411 [2024-07-23 18:36:44.190301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.411 [2024-07-23 18:36:44.190310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.411 [2024-07-23 18:36:44.190317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.411 [2024-07-23 18:36:44.190330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.411 [2024-07-23 18:36:44.190339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.412 [2024-07-23 18:36:44.190357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.412 [2024-07-23 18:36:44.190364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.412 [2024-07-23 18:36:44.190373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.412 [2024-07-23 18:36:44.190381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.412 [2024-07-23 18:36:44.190389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:18:44.412 [2024-07-23 18:36:44.190400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.192303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.412 [2024-07-23 18:36:44.192326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.412 [2024-07-23 18:36:44.192337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:18:44.412 [2024-07-23 18:36:44.192350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.192468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.412 [2024-07-23 18:36:44.192479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.412 [2024-07-23 18:36:44.192489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:44.412 [2024-07-23 18:36:44.192497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.198259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.198284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.412 [2024-07-23 18:36:44.198300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.198308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.198352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.198361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.412 [2024-07-23 18:36:44.198381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.198396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.198440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:44.412 [2024-07-23 18:36:44.198452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.412 [2024-07-23 18:36:44.198460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.198468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.198490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.198499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.412 [2024-07-23 18:36:44.198508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.198516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.212039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.212090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.412 [2024-07-23 18:36:44.212103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.212121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.220459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.412 [2024-07-23 18:36:44.220526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.220563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.412 [2024-07-23 18:36:44.220594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.220635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.412 [2024-07-23 18:36:44.220663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.220775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.412 [2024-07-23 18:36:44.220797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.220849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.412 [2024-07-23 18:36:44.220874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 
18:36:44.220930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.220940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.412 [2024-07-23 18:36:44.220950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.220959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.221015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.412 [2024-07-23 18:36:44.221028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.412 [2024-07-23 18:36:44.221036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.412 [2024-07-23 18:36:44.221056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.412 [2024-07-23 18:36:44.221217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.861 ms, result 0 00:18:44.412 00:18:44.412 00:18:44.672 18:36:44 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:44.672 18:36:44 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:44.932 18:36:44 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.191 [2024-07-23 18:36:45.013774] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:45.191 [2024-07-23 18:36:45.013903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90029 ] 00:18:45.191 [2024-07-23 18:36:45.159622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.191 [2024-07-23 18:36:45.204127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.453 [2024-07-23 18:36:45.306460] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:45.453 [2024-07-23 18:36:45.306541] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:45.453 [2024-07-23 18:36:45.453032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.453089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:45.453 [2024-07-23 18:36:45.453104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:45.453 [2024-07-23 18:36:45.453120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.455099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.455142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.453 [2024-07-23 18:36:45.455153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:18:45.453 [2024-07-23 18:36:45.455162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.455243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:45.453 [2024-07-23 18:36:45.455459] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:45.453 [2024-07-23 18:36:45.455486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.455495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.453 [2024-07-23 18:36:45.455505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:45.453 [2024-07-23 18:36:45.455527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.457080] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:45.453 [2024-07-23 18:36:45.459681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.459720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:45.453 [2024-07-23 18:36:45.459731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:18:45.453 [2024-07-23 18:36:45.459740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.459816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.459827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:45.453 [2024-07-23 18:36:45.459836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:45.453 [2024-07-23 18:36:45.459847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.466707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.466736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.453 [2024-07-23 18:36:45.466746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.832 ms 00:18:45.453 [2024-07-23 18:36:45.466755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.466855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.466871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.453 [2024-07-23 18:36:45.466881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:45.453 [2024-07-23 18:36:45.466893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.466942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.466956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:45.453 [2024-07-23 18:36:45.466965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:45.453 [2024-07-23 18:36:45.466982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.467020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:45.453 [2024-07-23 18:36:45.468715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.468746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.453 [2024-07-23 18:36:45.468761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:18:45.453 [2024-07-23 18:36:45.468770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 
18:36:45.468823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.468834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:45.453 [2024-07-23 18:36:45.468844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:45.453 [2024-07-23 18:36:45.468852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.468872] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:45.453 [2024-07-23 18:36:45.468896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:45.453 [2024-07-23 18:36:45.468951] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:45.453 [2024-07-23 18:36:45.468972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:45.453 [2024-07-23 18:36:45.469062] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:45.453 [2024-07-23 18:36:45.469074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:45.453 [2024-07-23 18:36:45.469095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:45.453 [2024-07-23 18:36:45.469106] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469116] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469126] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:45.453 [2024-07-23 18:36:45.469144] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:45.453 [2024-07-23 18:36:45.469152] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:45.453 [2024-07-23 18:36:45.469165] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:45.453 [2024-07-23 18:36:45.469188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.469198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:45.453 [2024-07-23 18:36:45.469207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:45.453 [2024-07-23 18:36:45.469215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.469292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.453 [2024-07-23 18:36:45.469304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:45.453 [2024-07-23 18:36:45.469314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:45.453 [2024-07-23 18:36:45.469322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.453 [2024-07-23 18:36:45.469423] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:45.453 [2024-07-23 18:36:45.469437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:45.453 [2024-07-23 18:36:45.469454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469463] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:45.453 [2024-07-23 18:36:45.469480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:45.453 [2024-07-23 18:36:45.469505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.453 [2024-07-23 18:36:45.469521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:45.453 [2024-07-23 18:36:45.469530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:45.453 [2024-07-23 18:36:45.469540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.453 [2024-07-23 18:36:45.469548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:45.453 [2024-07-23 18:36:45.469556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:45.453 [2024-07-23 18:36:45.469564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:45.453 [2024-07-23 18:36:45.469578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:45.453 [2024-07-23 18:36:45.469618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:45.453 [2024-07-23 18:36:45.469642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:45.453 [2024-07-23 18:36:45.469664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:45.453 [2024-07-23 18:36:45.469692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.453 [2024-07-23 18:36:45.469707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:45.453 [2024-07-23 18:36:45.469715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:45.453 [2024-07-23 18:36:45.469721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.453 [2024-07-23 18:36:45.469729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:45.453 [2024-07-23 18:36:45.469736] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:45.453 [2024-07-23 18:36:45.469743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.454 [2024-07-23 18:36:45.469751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:45.454 [2024-07-23 18:36:45.469759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:45.454 [2024-07-23 18:36:45.469767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.454 [2024-07-23 18:36:45.469775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:45.454 [2024-07-23 18:36:45.469784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:45.454 [2024-07-23 18:36:45.469794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.454 [2024-07-23 18:36:45.469801] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:45.454 [2024-07-23 18:36:45.469821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:45.454 [2024-07-23 18:36:45.469829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.454 [2024-07-23 18:36:45.469837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.454 [2024-07-23 18:36:45.469853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:45.454 [2024-07-23 18:36:45.469861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:45.454 [2024-07-23 18:36:45.469869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:45.454 [2024-07-23 18:36:45.469877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:45.454 [2024-07-23 18:36:45.469885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:45.454 [2024-07-23 18:36:45.469892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:45.454 [2024-07-23 18:36:45.469901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:45.454 [2024-07-23 18:36:45.469910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.469919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:45.454 [2024-07-23 18:36:45.469926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:45.454 [2024-07-23 18:36:45.469936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:45.454 [2024-07-23 18:36:45.469945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:45.454 [2024-07-23 18:36:45.469953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:45.454 [2024-07-23 18:36:45.469963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:45.454 [2024-07-23 18:36:45.469988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:45.454 [2024-07-23 
18:36:45.469997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:45.454 [2024-07-23 18:36:45.470005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:45.454 [2024-07-23 18:36:45.470014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:45.454 [2024-07-23 18:36:45.470054] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:45.454 [2024-07-23 18:36:45.470066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:45.454 [2024-07-23 18:36:45.470096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:45.454 [2024-07-23 18:36:45.470106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:45.454 [2024-07-23 18:36:45.470117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:45.454 [2024-07-23 18:36:45.470127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.470138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:45.454 [2024-07-23 18:36:45.470163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:18:45.454 [2024-07-23 18:36:45.470173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.491112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.491174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.454 [2024-07-23 18:36:45.491200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.919 ms 00:18:45.454 [2024-07-23 18:36:45.491234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.491378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.491392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:45.454 [2024-07-23 18:36:45.491404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:45.454 [2024-07-23 18:36:45.491413] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.501257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.501296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.454 [2024-07-23 18:36:45.501308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.835 ms 00:18:45.454 [2024-07-23 18:36:45.501322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.501389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.501400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.454 [2024-07-23 18:36:45.501422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.454 [2024-07-23 18:36:45.501431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.501901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.501920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.454 [2024-07-23 18:36:45.501931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:18:45.454 [2024-07-23 18:36:45.501940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.454 [2024-07-23 18:36:45.502061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.454 [2024-07-23 18:36:45.502085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.454 [2024-07-23 18:36:45.502096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:45.454 [2024-07-23 18:36:45.502105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.508550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.508597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.715 [2024-07-23 18:36:45.508609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.430 ms 00:18:45.715 [2024-07-23 18:36:45.508618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.511308] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:45.715 [2024-07-23 18:36:45.511346] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:45.715 [2024-07-23 18:36:45.511359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.511371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:45.715 [2024-07-23 18:36:45.511380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:18:45.715 [2024-07-23 18:36:45.511389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.523748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.523788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:45.715 [2024-07-23 18:36:45.523801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.320 ms 00:18:45.715 [2024-07-23 18:36:45.523815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 
18:36:45.525715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.525752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:45.715 [2024-07-23 18:36:45.525762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:18:45.715 [2024-07-23 18:36:45.525771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.527380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.527417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:45.715 [2024-07-23 18:36:45.527428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:18:45.715 [2024-07-23 18:36:45.527437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.527753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.527775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:45.715 [2024-07-23 18:36:45.527786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:45.715 [2024-07-23 18:36:45.527794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.550517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.550586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:45.715 [2024-07-23 18:36:45.550613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.736 ms 00:18:45.715 [2024-07-23 18:36:45.550624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.556752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:45.715 [2024-07-23 18:36:45.573879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.573937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:45.715 [2024-07-23 18:36:45.573952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:18:45.715 [2024-07-23 18:36:45.573962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.574079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.574092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:45.715 [2024-07-23 18:36:45.574108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:45.715 [2024-07-23 18:36:45.574117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.574180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.574191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:45.715 [2024-07-23 18:36:45.574200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:45.715 [2024-07-23 18:36:45.574209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.574234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.574243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:45.715 [2024-07-23 18:36:45.574270] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:45.715 [2024-07-23 18:36:45.574283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.574326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:45.715 [2024-07-23 18:36:45.574337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.574346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:45.715 [2024-07-23 18:36:45.574356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:45.715 [2024-07-23 18:36:45.574365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.578384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.715 [2024-07-23 18:36:45.578424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:45.715 [2024-07-23 18:36:45.578436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.003 ms 00:18:45.715 [2024-07-23 18:36:45.578452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.715 [2024-07-23 18:36:45.578544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.716 [2024-07-23 18:36:45.578583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:45.716 [2024-07-23 18:36:45.578595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:45.716 [2024-07-23 18:36:45.578604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.716 [2024-07-23 18:36:45.579604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.716 [2024-07-23 18:36:45.580651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.504 ms, result 0 00:18:45.716 [2024-07-23 18:36:45.581397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.716 [2024-07-23 18:36:45.591003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.716  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-23 18:36:45.754102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.716 [2024-07-23 18:36:45.755448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.716 [2024-07-23 18:36:45.755487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.716 [2024-07-23 18:36:45.755500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.716 [2024-07-23 18:36:45.755510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.716 [2024-07-23 18:36:45.755531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:45.716 [2024-07-23 18:36:45.756225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.716 [2024-07-23 18:36:45.756261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.716 [2024-07-23 18:36:45.756272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:18:45.716 [2024-07-23 18:36:45.756281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:45.716 [2024-07-23 18:36:45.758416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.716 [2024-07-23 18:36:45.758458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.716 [2024-07-23 18:36:45.758476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:18:45.716 [2024-07-23 18:36:45.758495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.716 [2024-07-23 18:36:45.761659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.716 [2024-07-23 18:36:45.761708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.716 [2024-07-23 18:36:45.761720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms 00:18:45.716 [2024-07-23 18:36:45.761728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.978 [2024-07-23 18:36:45.767181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.978 [2024-07-23 18:36:45.767225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.978 [2024-07-23 18:36:45.767243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms 00:18:45.978 [2024-07-23 18:36:45.767251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.978 [2024-07-23 18:36:45.768799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.978 [2024-07-23 18:36:45.768837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.978 [2024-07-23 18:36:45.768848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:18:45.978 [2024-07-23 18:36:45.768857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.978 [2024-07-23 18:36:45.773732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.773770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.979 [2024-07-23 18:36:45.773794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:18:45.979 [2024-07-23 18:36:45.773802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.773906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.773917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.979 [2024-07-23 18:36:45.773930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:45.979 [2024-07-23 18:36:45.773940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.776352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.776388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:45.979 [2024-07-23 18:36:45.776400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.400 ms 00:18:45.979 [2024-07-23 18:36:45.776418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.778070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.778108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:45.979 [2024-07-23 18:36:45.778118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:18:45.979 [2024-07-23 18:36:45.778126] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.779400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.779440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.979 [2024-07-23 18:36:45.779452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:18:45.979 [2024-07-23 18:36:45.779460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.780532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.979 [2024-07-23 18:36:45.780586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.979 [2024-07-23 18:36:45.780597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:18:45.979 [2024-07-23 18:36:45.780606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.979 [2024-07-23 18:36:45.780635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.979 [2024-07-23 18:36:45.780651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780808] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.780997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 
[2024-07-23 18:36:45.781049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:18:45.979 [2024-07-23 18:36:45.781264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.979 [2024-07-23 18:36:45.781272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:45.980 [2024-07-23 18:36:45.781558] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.980 [2024-07-23 18:36:45.781578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:45.980 [2024-07-23 18:36:45.781588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.980 [2024-07-23 18:36:45.781614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:45.980 [2024-07-23 18:36:45.781622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.980 [2024-07-23 18:36:45.781631] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.980 [2024-07-23 18:36:45.781639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.980 [2024-07-23 18:36:45.781652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.980 [2024-07-23 18:36:45.781660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.980 [2024-07-23 18:36:45.781667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.980 [2024-07-23 18:36:45.781675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.980 [2024-07-23 18:36:45.781684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.980 [2024-07-23 18:36:45.781692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.980 [2024-07-23 18:36:45.781702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:18:45.980 [2024-07-23 18:36:45.781716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.783509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.980 [2024-07-23 18:36:45.783531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.980 [2024-07-23 18:36:45.783541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:18:45.980 [2024-07-23 18:36:45.783554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.783671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.980 [2024-07-23 18:36:45.783685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:18:45.980 [2024-07-23 18:36:45.783696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:45.980 [2024-07-23 18:36:45.783704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.789475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.789501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.980 [2024-07-23 18:36:45.789524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.789533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.789607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.789618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.980 [2024-07-23 18:36:45.789627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.789635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.789682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.789692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.980 [2024-07-23 18:36:45.789701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.789714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.789734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.789744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.980 [2024-07-23 18:36:45.789752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.789760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.803287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.803449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.980 [2024-07-23 18:36:45.803504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.803534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.811755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.811878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.980 [2024-07-23 18:36:45.811912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.811980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.812032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.812057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.980 [2024-07-23 18:36:45.812091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.812134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.812203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 
18:36:45.812231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.980 [2024-07-23 18:36:45.812295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.812330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.812434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.812483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.980 [2024-07-23 18:36:45.812517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.812551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.812654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.812712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.980 [2024-07-23 18:36:45.812751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.812788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.812864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.980 [2024-07-23 18:36:45.812906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.980 [2024-07-23 18:36:45.812940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.980 [2024-07-23 18:36:45.812976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.980 [2024-07-23 18:36:45.813054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.981 [2024-07-23 18:36:45.813094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.981 [2024-07-23 18:36:45.813129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.981 [2024-07-23 18:36:45.813182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.981 [2024-07-23 18:36:45.813357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.990 ms, result 0 00:18:46.241 00:18:46.241 00:18:46.241 18:36:46 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90043 00:18:46.241 18:36:46 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:46.241 18:36:46 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90043 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 90043 ']' 00:18:46.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:46.241 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:46.241 [2024-07-23 18:36:46.160114] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:18:46.241 [2024-07-23 18:36:46.160230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90043 ] 00:18:46.499 [2024-07-23 18:36:46.304010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.499 [2024-07-23 18:36:46.348666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.067 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:47.067 18:36:46 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:47.067 18:36:46 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:47.067 [2024-07-23 18:36:47.108980] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.067 [2024-07-23 18:36:47.109047] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:47.328 [2024-07-23 18:36:47.271752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.271806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:47.328 [2024-07-23 18:36:47.271824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.328 [2024-07-23 18:36:47.271834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.273812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.273856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.328 [2024-07-23 18:36:47.273872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.962 ms 00:18:47.328 [2024-07-23 18:36:47.273889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.273966] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:47.328 [2024-07-23 18:36:47.274187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:47.328 [2024-07-23 18:36:47.274204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.274215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.328 [2024-07-23 18:36:47.274227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:47.328 [2024-07-23 18:36:47.274237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.275776] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:47.328 [2024-07-23 18:36:47.278280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.278332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:47.328 [2024-07-23 18:36:47.278343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:18:47.328 [2024-07-23 18:36:47.278355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.278416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.278430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:47.328 [2024-07-23 18:36:47.278450] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:47.328 [2024-07-23 18:36:47.278463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.285183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.285214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.328 [2024-07-23 18:36:47.285224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:18:47.328 [2024-07-23 18:36:47.285235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.285338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.285356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.328 [2024-07-23 18:36:47.285367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:47.328 [2024-07-23 18:36:47.285378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.285420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.285432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:47.328 [2024-07-23 18:36:47.285441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:47.328 [2024-07-23 18:36:47.285451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.285488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:47.328 [2024-07-23 18:36:47.287146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.287176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.328 [2024-07-23 18:36:47.287190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:18:47.328 [2024-07-23 18:36:47.287202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.287274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.287285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:47.328 [2024-07-23 18:36:47.287297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:47.328 [2024-07-23 18:36:47.287306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.287330] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:47.328 [2024-07-23 18:36:47.287363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:47.328 [2024-07-23 18:36:47.287398] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:47.328 [2024-07-23 18:36:47.287417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:47.328 [2024-07-23 18:36:47.287512] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:47.328 [2024-07-23 18:36:47.287524] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:47.328 [2024-07-23 18:36:47.287542] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:47.328 [2024-07-23 18:36:47.287552] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:47.328 [2024-07-23 18:36:47.287563] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:47.328 [2024-07-23 18:36:47.287612] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:47.328 [2024-07-23 18:36:47.287626] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:47.328 [2024-07-23 18:36:47.287641] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:47.328 [2024-07-23 18:36:47.287652] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:47.328 [2024-07-23 18:36:47.287665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.287677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:47.328 [2024-07-23 18:36:47.287686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:18:47.328 [2024-07-23 18:36:47.287696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.287764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.328 [2024-07-23 18:36:47.287778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:47.328 [2024-07-23 18:36:47.287787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:47.328 [2024-07-23 18:36:47.287805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.328 [2024-07-23 18:36:47.287912] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:47.328 [2024-07-23 18:36:47.287946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:47.328 [2024-07-23 18:36:47.287964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.328 [2024-07-23 18:36:47.287976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.328 [2024-07-23 18:36:47.287986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:47.328 [2024-07-23 18:36:47.287998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:47.328 [2024-07-23 18:36:47.288026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.328 [2024-07-23 18:36:47.288045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:47.328 [2024-07-23 18:36:47.288056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:47.328 [2024-07-23 18:36:47.288066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:47.328 [2024-07-23 18:36:47.288077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:47.328 [2024-07-23 18:36:47.288086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:47.328 [2024-07-23 18:36:47.288096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.328 
[2024-07-23 18:36:47.288104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:47.328 [2024-07-23 18:36:47.288114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:47.328 [2024-07-23 18:36:47.288142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:47.328 [2024-07-23 18:36:47.288171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:47.328 [2024-07-23 18:36:47.288199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:47.328 [2024-07-23 18:36:47.288229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:47.328 [2024-07-23 18:36:47.288246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:47.328 [2024-07-23 18:36:47.288255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:47.328 [2024-07-23 18:36:47.288264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.328 [2024-07-23 18:36:47.288272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:47.328 [2024-07-23 18:36:47.288283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:47.329 [2024-07-23 18:36:47.288291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:47.329 [2024-07-23 18:36:47.288303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:47.329 [2024-07-23 18:36:47.288312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:47.329 [2024-07-23 18:36:47.288321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.329 [2024-07-23 18:36:47.288329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:47.329 [2024-07-23 18:36:47.288338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:47.329 [2024-07-23 18:36:47.288347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.329 [2024-07-23 18:36:47.288357] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:47.329 [2024-07-23 18:36:47.288365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:47.329 [2024-07-23 18:36:47.288376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:47.329 [2024-07-23 18:36:47.288385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:47.329 [2024-07-23 18:36:47.288396] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:47.329 [2024-07-23 18:36:47.288404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:47.329 [2024-07-23 18:36:47.288414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:47.329 [2024-07-23 18:36:47.288422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:47.329 [2024-07-23 18:36:47.288432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:47.329 [2024-07-23 18:36:47.288441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:47.329 [2024-07-23 18:36:47.288454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:47.329 [2024-07-23 18:36:47.288464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.288477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:47.329 [2024-07-23 18:36:47.288485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:47.329 [2024-07-23 18:36:47.288496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:47.329 [2024-07-23 18:36:47.288505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:47.329 [2024-07-23 18:36:47.288515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:47.329 [2024-07-23 18:36:47.288524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:47.329 [2024-07-23 18:36:47.288535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:47.329 [2024-07-23 18:36:47.288544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:47.329 [2024-07-23 18:36:47.288554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:47.329 [2024-07-23 18:36:47.288563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.288573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.288710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.288760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.288834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:47.329 [2024-07-23 18:36:47.288904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:47.329 [2024-07-23 
18:36:47.288968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.289022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:47.329 [2024-07-23 18:36:47.289077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:47.329 [2024-07-23 18:36:47.289138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:47.329 [2024-07-23 18:36:47.289191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:47.329 [2024-07-23 18:36:47.289248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.289287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:47.329 [2024-07-23 18:36:47.289340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:18:47.329 [2024-07-23 18:36:47.289365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.301284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.301389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.329 [2024-07-23 18:36:47.301424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.838 ms 00:18:47.329 [2024-07-23 18:36:47.301448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.301606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.301657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:47.329 [2024-07-23 18:36:47.301715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:47.329 [2024-07-23 18:36:47.301739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.311748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.311786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.329 [2024-07-23 18:36:47.311812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.967 ms 00:18:47.329 [2024-07-23 18:36:47.311830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.311897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.311908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.329 [2024-07-23 18:36:47.311919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:47.329 [2024-07-23 18:36:47.311928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.312358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.312371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.329 [2024-07-23 18:36:47.312383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:47.329 [2024-07-23 18:36:47.312391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:47.329 [2024-07-23 18:36:47.312509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.312524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.329 [2024-07-23 18:36:47.312538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:47.329 [2024-07-23 18:36:47.312547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.319557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.319602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.329 [2024-07-23 18:36:47.319624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.966 ms 00:18:47.329 [2024-07-23 18:36:47.319634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.322236] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:47.329 [2024-07-23 18:36:47.322274] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:47.329 [2024-07-23 18:36:47.322289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.322298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:47.329 [2024-07-23 18:36:47.322310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:18:47.329 [2024-07-23 18:36:47.322319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.334653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.334706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:47.329 [2024-07-23 18:36:47.334721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:18:47.329 [2024-07-23 18:36:47.334730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.336517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.336559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:47.329 [2024-07-23 18:36:47.336590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:18:47.329 [2024-07-23 18:36:47.336599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.338226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.338261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:47.329 [2024-07-23 18:36:47.338275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:18:47.329 [2024-07-23 18:36:47.338284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.329 [2024-07-23 18:36:47.338554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.329 [2024-07-23 18:36:47.338570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:47.329 [2024-07-23 18:36:47.338595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:18:47.329 [2024-07-23 18:36:47.338604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.382454] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.382543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:47.593 [2024-07-23 18:36:47.382595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.898 ms 00:18:47.593 [2024-07-23 18:36:47.382614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.391180] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:47.593 [2024-07-23 18:36:47.408008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.408075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:47.593 [2024-07-23 18:36:47.408090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.321 ms 00:18:47.593 [2024-07-23 18:36:47.408119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.408218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.408233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:47.593 [2024-07-23 18:36:47.408254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:47.593 [2024-07-23 18:36:47.408272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.408330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.408342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:47.593 [2024-07-23 18:36:47.408351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:47.593 [2024-07-23 18:36:47.408377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.408404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.408415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:47.593 [2024-07-23 18:36:47.408425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:47.593 [2024-07-23 18:36:47.408440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.408475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:47.593 [2024-07-23 18:36:47.408494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.408503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:47.593 [2024-07-23 18:36:47.408514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:47.593 [2024-07-23 18:36:47.408522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.412341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.412381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:47.593 [2024-07-23 18:36:47.412395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.797 ms 00:18:47.593 [2024-07-23 18:36:47.412406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.412497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.412508] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:47.593 [2024-07-23 18:36:47.412520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:47.593 [2024-07-23 18:36:47.412529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.593 [2024-07-23 18:36:47.413497] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:47.593 [2024-07-23 18:36:47.414448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.753 ms, result 0 00:18:47.593 [2024-07-23 18:36:47.415587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:47.593 Some configs were skipped because the RPC state that can call them passed over. 00:18:47.593 18:36:47 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:47.593 [2024-07-23 18:36:47.625876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.593 [2024-07-23 18:36:47.625975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:47.593 [2024-07-23 18:36:47.626012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:18:47.593 [2024-07-23 18:36:47.626039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.594 [2024-07-23 18:36:47.626108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.722 ms, result 0 00:18:47.594 true 00:18:47.866 18:36:47 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:47.866 [2024-07-23 18:36:47.813314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.866 [2024-07-23 18:36:47.813423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:47.866 [2024-07-23 18:36:47.813460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:18:47.866 [2024-07-23 18:36:47.813486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.866 [2024-07-23 18:36:47.813541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.368 ms, result 0 00:18:47.866 true 00:18:47.866 18:36:47 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90043 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90043 ']' 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90043 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90043 00:18:47.866 killing process with pid 90043 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90043' 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 90043 00:18:47.866 18:36:47 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 90043 00:18:48.149 [2024-07-23 18:36:48.000976] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.149 [2024-07-23 18:36:48.001045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:48.149 [2024-07-23 18:36:48.001060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:48.149 [2024-07-23 18:36:48.001071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.149 [2024-07-23 18:36:48.001096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:48.149 [2024-07-23 18:36:48.001762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.149 [2024-07-23 18:36:48.001778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:48.149 [2024-07-23 18:36:48.001804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:18:48.149 [2024-07-23 18:36:48.001813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.149 [2024-07-23 18:36:48.002055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.149 [2024-07-23 18:36:48.002067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:48.149 [2024-07-23 18:36:48.002079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:48.149 [2024-07-23 18:36:48.002087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.149 [2024-07-23 18:36:48.005398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.149 [2024-07-23 18:36:48.005448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:48.149 [2024-07-23 18:36:48.005462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:18:48.149 [2024-07-23 18:36:48.005472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.149 [2024-07-23 18:36:48.010937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.149 [2024-07-23 18:36:48.010978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:48.150 [2024-07-23 18:36:48.010996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.426 ms 00:18:48.150 [2024-07-23 18:36:48.011005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.012457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.012503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:48.150 [2024-07-23 18:36:48.012517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:18:48.150 [2024-07-23 18:36:48.012526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.017036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.017076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:48.150 [2024-07-23 18:36:48.017089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:18:48.150 [2024-07-23 18:36:48.017101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.017212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.017222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:48.150 [2024-07-23 18:36:48.017234] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:48.150 [2024-07-23 18:36:48.017243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.019516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.019552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:48.150 [2024-07-23 18:36:48.019565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:18:48.150 [2024-07-23 18:36:48.019587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.021115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.021153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:48.150 [2024-07-23 18:36:48.021166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:18:48.150 [2024-07-23 18:36:48.021175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.022423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.022467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:48.150 [2024-07-23 18:36:48.022481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:18:48.150 [2024-07-23 18:36:48.022489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.023805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.150 [2024-07-23 18:36:48.023883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:48.150 [2024-07-23 18:36:48.023953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:18:48.150 [2024-07-23 18:36:48.023965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.150 [2024-07-23 18:36:48.024005] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:48.150 [2024-07-23 18:36:48.024021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024141] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 
18:36:48.024421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:48.150 [2024-07-23 18:36:48.024666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:48.151 [2024-07-23 18:36:48.024695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.024995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:48.151 [2024-07-23 18:36:48.025114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:48.151 [2024-07-23 18:36:48.025124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:48.151 [2024-07-23 18:36:48.025134] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:48.151 [2024-07-23 18:36:48.025147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:48.151 [2024-07-23 18:36:48.025157] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:48.151 [2024-07-23 18:36:48.025169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:48.151 [2024-07-23 18:36:48.025177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:48.151 [2024-07-23 18:36:48.025188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:48.151 [2024-07-23 18:36:48.025196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:48.151 [2024-07-23 18:36:48.025206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:48.151 [2024-07-23 18:36:48.025214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:48.151 [2024-07-23 18:36:48.025226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.151 
[2024-07-23 18:36:48.025234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:48.151 [2024-07-23 18:36:48.025245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:18:48.151 [2024-07-23 18:36:48.025254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.026985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.151 [2024-07-23 18:36:48.027008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:48.151 [2024-07-23 18:36:48.027022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:18:48.151 [2024-07-23 18:36:48.027039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.027172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.151 [2024-07-23 18:36:48.027189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:48.151 [2024-07-23 18:36:48.027201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:48.151 [2024-07-23 18:36:48.027218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.033506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.033532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.151 [2024-07-23 18:36:48.033545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.033557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.033641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.033653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.151 [2024-07-23 18:36:48.033665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.033673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.033744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.033762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.151 [2024-07-23 18:36:48.033774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.033783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.033806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.033816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.151 [2024-07-23 18:36:48.033827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.033836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.047268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.047318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.151 [2024-07-23 18:36:48.047333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.047342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.055509] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.055554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.151 [2024-07-23 18:36:48.055576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.055586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.055638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.055650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.151 [2024-07-23 18:36:48.055662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.055670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.055704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.055714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.151 [2024-07-23 18:36:48.055724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.151 [2024-07-23 18:36:48.055744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.151 [2024-07-23 18:36:48.055820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.151 [2024-07-23 18:36:48.055832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.152 [2024-07-23 18:36:48.055846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.152 [2024-07-23 18:36:48.055861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.152 [2024-07-23 18:36:48.055907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.152 [2024-07-23 18:36:48.055919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:48.152 [2024-07-23 18:36:48.055930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.152 [2024-07-23 18:36:48.055939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.152 [2024-07-23 18:36:48.055998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.152 [2024-07-23 18:36:48.056009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.152 [2024-07-23 18:36:48.056025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.152 [2024-07-23 18:36:48.056034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.152 [2024-07-23 18:36:48.056083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.152 [2024-07-23 18:36:48.056094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.152 [2024-07-23 18:36:48.056106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.152 [2024-07-23 18:36:48.056114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.152 [2024-07-23 18:36:48.056250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.356 ms, result 0 00:18:48.436 18:36:48 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:48.436 
[2024-07-23 18:36:48.371561] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:48.436 [2024-07-23 18:36:48.371762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90079 ] 00:18:48.696 [2024-07-23 18:36:48.517020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.696 [2024-07-23 18:36:48.563315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.696 [2024-07-23 18:36:48.667058] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.696 [2024-07-23 18:36:48.667261] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.958 [2024-07-23 18:36:48.814773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.814927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:48.958 [2024-07-23 18:36:48.814969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:48.958 [2024-07-23 18:36:48.814994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.817004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.817096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.958 [2024-07-23 18:36:48.817129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.975 ms 00:18:48.958 [2024-07-23 18:36:48.817153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.817283] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:48.958 [2024-07-23 18:36:48.817625] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:48.958 [2024-07-23 18:36:48.817709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.817751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.958 [2024-07-23 18:36:48.817798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:18:48.958 [2024-07-23 18:36:48.817844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.819385] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:48.958 [2024-07-23 18:36:48.822005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.822086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:48.958 [2024-07-23 18:36:48.822140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.628 ms 00:18:48.958 [2024-07-23 18:36:48.822173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.822260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.822322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:48.958 [2024-07-23 18:36:48.822357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:48.958 [2024-07-23 18:36:48.822395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 
18:36:48.829233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.829307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.958 [2024-07-23 18:36:48.829361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:18:48.958 [2024-07-23 18:36:48.829385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.829540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.829620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.958 [2024-07-23 18:36:48.829654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:48.958 [2024-07-23 18:36:48.829701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.829793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.829840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:48.958 [2024-07-23 18:36:48.829872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:48.958 [2024-07-23 18:36:48.829908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.829963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:48.958 [2024-07-23 18:36:48.831667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.831737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.958 [2024-07-23 18:36:48.831775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:18:48.958 [2024-07-23 18:36:48.831803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.831876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.831926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:48.958 [2024-07-23 18:36:48.831960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:48.958 [2024-07-23 18:36:48.831992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.832042] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:48.958 [2024-07-23 18:36:48.832093] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:48.958 [2024-07-23 18:36:48.832183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:48.958 [2024-07-23 18:36:48.832245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:48.958 [2024-07-23 18:36:48.832377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:48.958 [2024-07-23 18:36:48.832426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:48.958 [2024-07-23 18:36:48.832476] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:48.958 [2024-07-23 18:36:48.832526] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device 
capacity: 103424.00 MiB 00:18:48.958 [2024-07-23 18:36:48.832565] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:48.958 [2024-07-23 18:36:48.832594] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:48.958 [2024-07-23 18:36:48.832603] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:48.958 [2024-07-23 18:36:48.832612] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:48.958 [2024-07-23 18:36:48.832624] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:48.958 [2024-07-23 18:36:48.832635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.832644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:48.958 [2024-07-23 18:36:48.832654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:18:48.958 [2024-07-23 18:36:48.832664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.832745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.958 [2024-07-23 18:36:48.832757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:48.958 [2024-07-23 18:36:48.832766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:48.958 [2024-07-23 18:36:48.832776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.958 [2024-07-23 18:36:48.832864] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:48.958 [2024-07-23 18:36:48.832877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:48.958 [2024-07-23 18:36:48.832897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.958 [2024-07-23 18:36:48.832907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.958 [2024-07-23 18:36:48.832916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:48.958 [2024-07-23 18:36:48.832924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:48.958 [2024-07-23 18:36:48.832932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:48.958 [2024-07-23 18:36:48.832942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:48.958 [2024-07-23 18:36:48.832950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:48.958 [2024-07-23 18:36:48.832958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.958 [2024-07-23 18:36:48.832966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:48.958 [2024-07-23 18:36:48.832974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:48.958 [2024-07-23 18:36:48.832986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.958 [2024-07-23 18:36:48.832994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:48.958 [2024-07-23 18:36:48.833003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:48.958 [2024-07-23 18:36:48.833012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.958 [2024-07-23 18:36:48.833022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:48.958 [2024-07-23 18:36:48.833030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 
124.00 MiB 00:18:48.958 [2024-07-23 18:36:48.833039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.958 [2024-07-23 18:36:48.833047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:48.958 [2024-07-23 18:36:48.833055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:48.959 [2024-07-23 18:36:48.833079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:48.959 [2024-07-23 18:36:48.833104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:48.959 [2024-07-23 18:36:48.833137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:48.959 [2024-07-23 18:36:48.833160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.959 [2024-07-23 18:36:48.833176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:48.959 [2024-07-23 18:36:48.833184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:48.959 [2024-07-23 18:36:48.833192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.959 [2024-07-23 18:36:48.833200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:48.959 [2024-07-23 18:36:48.833209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:48.959 [2024-07-23 18:36:48.833217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:48.959 [2024-07-23 18:36:48.833233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:48.959 [2024-07-23 18:36:48.833241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833248] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:48.959 [2024-07-23 18:36:48.833259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:48.959 [2024-07-23 18:36:48.833269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.959 [2024-07-23 18:36:48.833300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:48.959 [2024-07-23 18:36:48.833309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:48.959 [2024-07-23 18:36:48.833318] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:48.959 [2024-07-23 18:36:48.833326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:48.959 [2024-07-23 18:36:48.833334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:48.959 [2024-07-23 18:36:48.833343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:48.959 [2024-07-23 18:36:48.833352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:48.959 [2024-07-23 18:36:48.833362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:48.959 [2024-07-23 18:36:48.833381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:48.959 [2024-07-23 18:36:48.833390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:48.959 [2024-07-23 18:36:48.833400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:48.959 [2024-07-23 18:36:48.833409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:48.959 [2024-07-23 18:36:48.833421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:48.959 [2024-07-23 18:36:48.833430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:48.959 [2024-07-23 18:36:48.833439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:48.959 [2024-07-23 18:36:48.833447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:48.959 [2024-07-23 18:36:48.833455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:48.959 [2024-07-23 18:36:48.833498] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:48.959 [2024-07-23 18:36:48.833511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833530] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:48.959 [2024-07-23 18:36:48.833539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:48.959 [2024-07-23 18:36:48.833547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:48.959 [2024-07-23 18:36:48.833556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:48.959 [2024-07-23 18:36:48.833565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.833593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:48.959 [2024-07-23 18:36:48.833611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:18:48.959 [2024-07-23 18:36:48.833629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.856112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.856249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.959 [2024-07-23 18:36:48.856296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:18:48.959 [2024-07-23 18:36:48.856336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.856544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.856663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:48.959 [2024-07-23 18:36:48.856707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:48.959 [2024-07-23 18:36:48.856748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.866774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.866889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.959 [2024-07-23 18:36:48.866929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.967 ms 00:18:48.959 [2024-07-23 18:36:48.866981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.867098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.867138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.959 [2024-07-23 18:36:48.867182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:48.959 [2024-07-23 18:36:48.867217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.867734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.867792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.959 [2024-07-23 18:36:48.867825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:18:48.959 [2024-07-23 18:36:48.867855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.868024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.868069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:18:48.959 [2024-07-23 18:36:48.868104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:48.959 [2024-07-23 18:36:48.868149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.874427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.874517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.959 [2024-07-23 18:36:48.874556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:18:48.959 [2024-07-23 18:36:48.874611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.877306] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:48.959 [2024-07-23 18:36:48.877417] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:48.959 [2024-07-23 18:36:48.877466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.877496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:48.959 [2024-07-23 18:36:48.877520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:18:48.959 [2024-07-23 18:36:48.877544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.890685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.890799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:48.959 [2024-07-23 18:36:48.890855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.067 ms 00:18:48.959 [2024-07-23 18:36:48.890884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.959 [2024-07-23 18:36:48.893088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.959 [2024-07-23 18:36:48.893184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:48.959 [2024-07-23 18:36:48.893217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:18:48.960 [2024-07-23 18:36:48.893240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.894919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.894993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:48.960 [2024-07-23 18:36:48.895047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:18:48.960 [2024-07-23 18:36:48.895073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.895431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.895509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.960 [2024-07-23 18:36:48.895545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:48.960 [2024-07-23 18:36:48.895597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.918366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.918527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:48.960 
[2024-07-23 18:36:48.918565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.731 ms 00:18:48.960 [2024-07-23 18:36:48.918613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.924658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:48.960 [2024-07-23 18:36:48.941633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.941798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.960 [2024-07-23 18:36:48.941834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.917 ms 00:18:48.960 [2024-07-23 18:36:48.941873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.942025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.942060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:48.960 [2024-07-23 18:36:48.942091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:48.960 [2024-07-23 18:36:48.942124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.942209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.942248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.960 [2024-07-23 18:36:48.942308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:48.960 [2024-07-23 18:36:48.942339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.942400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.942438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:48.960 [2024-07-23 18:36:48.942484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.960 [2024-07-23 18:36:48.942530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.942607] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:48.960 [2024-07-23 18:36:48.942654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.942689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:48.960 [2024-07-23 18:36:48.942733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:48.960 [2024-07-23 18:36:48.942764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.946823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.946912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:48.960 [2024-07-23 18:36:48.946966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:18:48.960 [2024-07-23 18:36:48.946998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.947106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.960 [2024-07-23 18:36:48.947150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:48.960 [2024-07-23 18:36:48.947194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:48.960 [2024-07-23 
18:36:48.947233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.960 [2024-07-23 18:36:48.948216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.960 [2024-07-23 18:36:48.949210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.430 ms, result 0 00:18:48.960 [2024-07-23 18:36:48.950018] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:48.960 [2024-07-23 18:36:48.958428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:58.170  Copying: 29/256 [MB] (29 MBps) Copying: 57/256 [MB] (27 MBps) Copying: 85/256 [MB] (28 MBps) Copying: 113/256 [MB] (28 MBps) Copying: 142/256 [MB] (29 MBps) Copying: 174/256 [MB] (31 MBps) Copying: 203/256 [MB] (28 MBps) Copying: 233/256 [MB] (29 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-07-23 18:36:58.077271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:58.170 [2024-07-23 18:36:58.078913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.078967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.170 [2024-07-23 18:36:58.078991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:58.170 [2024-07-23 18:36:58.079007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.079062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:58.170 [2024-07-23 18:36:58.079825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.079850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.170 [2024-07-23 18:36:58.079865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:18:58.170 [2024-07-23 18:36:58.079876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.080165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.080192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:58.170 [2024-07-23 18:36:58.080210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:58.170 [2024-07-23 18:36:58.080223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.084338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.084375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.170 [2024-07-23 18:36:58.084389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.101 ms 00:18:58.170 [2024-07-23 18:36:58.084420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.091444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.091487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.170 [2024-07-23 18:36:58.091498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.005 ms 00:18:58.170 [2024-07-23 18:36:58.091514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:58.170 [2024-07-23 18:36:58.093102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.093146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.170 [2024-07-23 18:36:58.093158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:18:58.170 [2024-07-23 18:36:58.093167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.098206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.098251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.170 [2024-07-23 18:36:58.098263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.002 ms 00:18:58.170 [2024-07-23 18:36:58.098272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.098382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.098395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.170 [2024-07-23 18:36:58.098409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:58.170 [2024-07-23 18:36:58.098418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.100648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.100687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:58.170 [2024-07-23 18:36:58.100699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:18:58.170 [2024-07-23 18:36:58.100707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.102416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.102459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:58.170 [2024-07-23 18:36:58.102470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:18:58.170 [2024-07-23 18:36:58.102478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.103723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.103761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.170 [2024-07-23 18:36:58.103772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:18:58.170 [2024-07-23 18:36:58.103780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.104797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.170 [2024-07-23 18:36:58.104836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.170 [2024-07-23 18:36:58.104848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:18:58.170 [2024-07-23 18:36:58.104857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.170 [2024-07-23 18:36:58.104887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.170 [2024-07-23 18:36:58.104907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.104997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:58.170 [2024-07-23 18:36:58.105007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105415] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.105977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106146] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:58.171 [2024-07-23 18:36:58.106830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:58.172 [2024-07-23 18:36:58.106894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.172 [2024-07-23 18:36:58.106904] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af89930e-19ee-46e5-a594-fa546b86b4cb 00:18:58.172 [2024-07-23 18:36:58.106913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.172 [2024-07-23 18:36:58.106922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.172 [2024-07-23 18:36:58.106930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.172 [2024-07-23 18:36:58.106939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.172 [2024-07-23 18:36:58.106948] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.172 [2024-07-23 18:36:58.106974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.172 [2024-07-23 18:36:58.106982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.172 [2024-07-23 18:36:58.106990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.172 [2024-07-23 18:36:58.106999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:58.172 [2024-07-23 18:36:58.107008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.172 [2024-07-23 18:36:58.107017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.172 [2024-07-23 18:36:58.107027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:18:58.172 [2024-07-23 18:36:58.107039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.108947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.172 [2024-07-23 18:36:58.108972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.172 [2024-07-23 18:36:58.108983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:18:58.172 [2024-07-23 18:36:58.108998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.109119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.172 [2024-07-23 18:36:58.109130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.172 [2024-07-23 18:36:58.109140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:58.172 [2024-07-23 18:36:58.109149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.115161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.115239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.172 [2024-07-23 18:36:58.115279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.115315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.115397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.115435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.172 [2024-07-23 18:36:58.115468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.115509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.115594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.115637] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.172 [2024-07-23 18:36:58.115678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.115715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.115767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.115802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.172 [2024-07-23 18:36:58.115831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.115867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.129524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.129651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.172 [2024-07-23 18:36:58.129689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.129722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.172 [2024-07-23 18:36:58.138270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.172 [2024-07-23 18:36:58.138439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.172 [2024-07-23 18:36:58.138591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.172 [2024-07-23 18:36:58.138704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.172 [2024-07-23 18:36:58.138783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:58.172 [2024-07-23 18:36:58.138846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.172 [2024-07-23 18:36:58.138856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.138927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.172 [2024-07-23 18:36:58.138946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.172 [2024-07-23 18:36:58.138956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.172 [2024-07-23 18:36:58.138977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.172 [2024-07-23 18:36:58.139116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.305 ms, result 0 00:18:58.431 00:18:58.431 00:18:58.431 18:36:58 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:58.999 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:58.999 Process with pid 90043 is not found 00:18:58.999 18:36:58 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90043 00:18:58.999 18:36:58 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90043 ']' 00:18:58.999 18:36:58 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90043 00:18:58.999 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90043) - No such process 00:18:58.999 18:36:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 90043 is not found' 00:18:58.999 ************************************ 00:18:58.999 END TEST ftl_trim 00:18:58.999 ************************************ 00:18:58.999 00:18:58.999 real 0m50.911s 00:18:58.999 user 1m14.024s 00:18:58.999 sys 0m5.793s 00:18:58.999 18:36:58 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:58.999 18:36:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:58.999 18:36:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:58.999 18:36:59 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:58.999 18:36:59 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.999 18:36:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:58.999 ************************************ 00:18:58.999 START TEST ftl_restore 00:18:58.999 ************************************ 00:18:58.999 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:59.259 * Looking for test storage... 
00:18:59.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.b0GzDjGfDP 00:18:59.259 18:36:59 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90248 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90248 00:18:59.259 18:36:59 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 90248 ']' 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:59.259 18:36:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:59.259 [2024-07-23 18:36:59.271770] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:59.259 [2024-07-23 18:36:59.271990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90248 ] 00:18:59.519 [2024-07-23 18:36:59.420924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.519 [2024-07-23 18:36:59.467987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.087 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:00.087 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:00.087 18:37:00 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:00.346 18:37:00 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:00.346 18:37:00 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:00.346 18:37:00 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:00.347 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:19:00.347 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:00.347 18:37:00 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:19:00.347 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:19:00.347 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:00.606 { 00:19:00.606 "name": "nvme0n1", 00:19:00.606 "aliases": [ 00:19:00.606 "ccaafb5f-1ee7-4651-bc52-cc0e5a91f35d" 00:19:00.606 ], 00:19:00.606 "product_name": "NVMe disk", 00:19:00.606 "block_size": 4096, 00:19:00.606 "num_blocks": 1310720, 00:19:00.606 "uuid": "ccaafb5f-1ee7-4651-bc52-cc0e5a91f35d", 00:19:00.606 "assigned_rate_limits": { 00:19:00.606 "rw_ios_per_sec": 0, 00:19:00.606 "rw_mbytes_per_sec": 0, 00:19:00.606 "r_mbytes_per_sec": 0, 00:19:00.606 "w_mbytes_per_sec": 0 00:19:00.606 }, 00:19:00.606 "claimed": true, 00:19:00.606 "claim_type": "read_many_write_one", 00:19:00.606 "zoned": false, 00:19:00.606 "supported_io_types": { 00:19:00.606 "read": true, 00:19:00.606 "write": true, 00:19:00.606 "unmap": true, 00:19:00.606 "write_zeroes": true, 00:19:00.606 "flush": true, 00:19:00.606 "reset": true, 00:19:00.606 "compare": true, 00:19:00.606 "compare_and_write": false, 00:19:00.606 "abort": true, 00:19:00.606 "nvme_admin": true, 00:19:00.606 "nvme_io": true 00:19:00.606 }, 00:19:00.606 "driver_specific": { 00:19:00.606 "nvme": [ 00:19:00.606 { 00:19:00.606 "pci_address": "0000:00:11.0", 00:19:00.606 "trid": { 00:19:00.606 "trtype": "PCIe", 00:19:00.606 "traddr": "0000:00:11.0" 00:19:00.606 }, 00:19:00.606 "ctrlr_data": { 00:19:00.606 "cntlid": 0, 00:19:00.606 "vendor_id": "0x1b36", 00:19:00.606 "model_number": "QEMU NVMe Ctrl", 00:19:00.606 "serial_number": "12341", 00:19:00.606 "firmware_revision": "8.0.0", 00:19:00.606 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:00.606 "oacs": { 00:19:00.606 "security": 0, 00:19:00.606 "format": 1, 00:19:00.606 "firmware": 0, 00:19:00.606 "ns_manage": 1 00:19:00.606 }, 00:19:00.606 "multi_ctrlr": false, 00:19:00.606 "ana_reporting": false 00:19:00.606 }, 00:19:00.606 "vs": { 00:19:00.606 "nvme_version": "1.4" 00:19:00.606 }, 00:19:00.606 "ns_data": { 00:19:00.606 "id": 1, 00:19:00.606 "can_share": false 00:19:00.606 } 00:19:00.606 } 00:19:00.606 ], 00:19:00.606 "mp_policy": "active_passive" 00:19:00.606 } 00:19:00.606 } 00:19:00.606 ]' 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:19:00.606 18:37:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:19:00.606 18:37:00 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:00.606 18:37:00 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:00.606 18:37:00 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:00.606 18:37:00 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:00.606 18:37:00 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:00.866 18:37:00 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5ee1391b-e2cc-4fc0-be38-5e3160aed5c2 00:19:00.866 18:37:00 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:19:00.866 18:37:00 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ee1391b-e2cc-4fc0-be38-5e3160aed5c2 00:19:01.125 18:37:00 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:01.125 18:37:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=99fb0c21-6231-4ead-a846-bf92b76828b5 00:19:01.125 18:37:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 99fb0c21-6231-4ead-a846-bf92b76828b5 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:01.384 18:37:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.384 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.384 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:01.384 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:19:01.384 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:19:01.384 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:01.653 { 00:19:01.653 "name": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:01.653 "aliases": [ 00:19:01.653 "lvs/nvme0n1p0" 00:19:01.653 ], 00:19:01.653 "product_name": "Logical Volume", 00:19:01.653 "block_size": 4096, 00:19:01.653 "num_blocks": 26476544, 00:19:01.653 "uuid": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:01.653 "assigned_rate_limits": { 00:19:01.653 "rw_ios_per_sec": 0, 00:19:01.653 "rw_mbytes_per_sec": 0, 00:19:01.653 "r_mbytes_per_sec": 0, 00:19:01.653 "w_mbytes_per_sec": 0 00:19:01.653 }, 00:19:01.653 "claimed": false, 00:19:01.653 "zoned": false, 00:19:01.653 "supported_io_types": { 00:19:01.653 "read": true, 00:19:01.653 "write": true, 00:19:01.653 "unmap": true, 00:19:01.653 "write_zeroes": true, 00:19:01.653 "flush": false, 00:19:01.653 "reset": true, 00:19:01.653 "compare": false, 00:19:01.653 "compare_and_write": false, 00:19:01.653 "abort": false, 00:19:01.653 "nvme_admin": false, 00:19:01.653 "nvme_io": false 00:19:01.653 }, 00:19:01.653 "driver_specific": { 00:19:01.653 "lvol": { 00:19:01.653 "lvol_store_uuid": "99fb0c21-6231-4ead-a846-bf92b76828b5", 00:19:01.653 "base_bdev": "nvme0n1", 00:19:01.653 "thin_provision": true, 00:19:01.653 "num_allocated_clusters": 0, 00:19:01.653 "snapshot": false, 00:19:01.653 "clone": false, 00:19:01.653 "esnap_clone": false 00:19:01.653 } 00:19:01.653 } 00:19:01.653 } 00:19:01.653 ]' 00:19:01.653 18:37:01 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:01.653 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:19:01.653 18:37:01 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:01.653 18:37:01 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:01.653 18:37:01 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:01.926 18:37:01 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:01.926 18:37:01 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:01.926 18:37:01 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.926 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:01.926 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:01.926 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:19:01.926 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:19:01.926 18:37:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:02.186 { 00:19:02.186 "name": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:02.186 "aliases": [ 00:19:02.186 "lvs/nvme0n1p0" 00:19:02.186 ], 00:19:02.186 "product_name": "Logical Volume", 00:19:02.186 "block_size": 4096, 00:19:02.186 "num_blocks": 26476544, 00:19:02.186 "uuid": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:02.186 "assigned_rate_limits": { 00:19:02.186 "rw_ios_per_sec": 0, 00:19:02.186 "rw_mbytes_per_sec": 0, 00:19:02.186 "r_mbytes_per_sec": 0, 00:19:02.186 "w_mbytes_per_sec": 0 00:19:02.186 }, 00:19:02.186 "claimed": false, 00:19:02.186 "zoned": false, 00:19:02.186 "supported_io_types": { 00:19:02.186 "read": true, 00:19:02.186 "write": true, 00:19:02.186 "unmap": true, 00:19:02.186 "write_zeroes": true, 00:19:02.186 "flush": false, 00:19:02.186 "reset": true, 00:19:02.186 "compare": false, 00:19:02.186 "compare_and_write": false, 00:19:02.186 "abort": false, 00:19:02.186 "nvme_admin": false, 00:19:02.186 "nvme_io": false 00:19:02.186 }, 00:19:02.186 "driver_specific": { 00:19:02.186 "lvol": { 00:19:02.186 "lvol_store_uuid": "99fb0c21-6231-4ead-a846-bf92b76828b5", 00:19:02.186 "base_bdev": "nvme0n1", 00:19:02.186 "thin_provision": true, 00:19:02.186 "num_allocated_clusters": 0, 00:19:02.186 "snapshot": false, 00:19:02.186 "clone": false, 00:19:02.186 "esnap_clone": false 00:19:02.186 } 00:19:02.186 } 00:19:02.186 } 00:19:02.186 ]' 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:02.186 18:37:02 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:02.186 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:19:02.186 18:37:02 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:02.186 18:37:02 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:02.445 18:37:02 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:02.445 18:37:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 00:19:02.445 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:19:02.445 { 00:19:02.445 "name": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:02.445 "aliases": [ 00:19:02.445 "lvs/nvme0n1p0" 00:19:02.445 ], 00:19:02.445 "product_name": "Logical Volume", 00:19:02.445 "block_size": 4096, 00:19:02.445 "num_blocks": 26476544, 00:19:02.445 "uuid": "215fbe90-5cfd-416b-9b6a-a9e8c4a18892", 00:19:02.445 "assigned_rate_limits": { 00:19:02.445 "rw_ios_per_sec": 0, 00:19:02.445 "rw_mbytes_per_sec": 0, 00:19:02.445 "r_mbytes_per_sec": 0, 00:19:02.445 "w_mbytes_per_sec": 0 00:19:02.445 }, 00:19:02.445 "claimed": false, 00:19:02.446 "zoned": false, 00:19:02.446 "supported_io_types": { 00:19:02.446 "read": true, 00:19:02.446 "write": true, 00:19:02.446 "unmap": true, 00:19:02.446 "write_zeroes": true, 00:19:02.446 "flush": false, 00:19:02.446 "reset": true, 00:19:02.446 "compare": false, 00:19:02.446 "compare_and_write": false, 00:19:02.446 "abort": false, 00:19:02.446 "nvme_admin": false, 00:19:02.446 "nvme_io": false 00:19:02.446 }, 00:19:02.446 "driver_specific": { 00:19:02.446 "lvol": { 00:19:02.446 "lvol_store_uuid": "99fb0c21-6231-4ead-a846-bf92b76828b5", 00:19:02.446 "base_bdev": "nvme0n1", 00:19:02.446 "thin_provision": true, 00:19:02.446 "num_allocated_clusters": 0, 00:19:02.446 "snapshot": false, 00:19:02.446 "clone": false, 00:19:02.446 "esnap_clone": false 00:19:02.446 } 00:19:02.446 } 00:19:02.446 } 00:19:02.446 ]' 00:19:02.446 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:19:02.706 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:19:02.706 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:19:02.706 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:19:02.706 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:19:02.706 18:37:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 --l2p_dram_limit 10' 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:02.706 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:02.706 18:37:02 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 215fbe90-5cfd-416b-9b6a-a9e8c4a18892 --l2p_dram_limit 10 -c nvc0n1p0 00:19:02.706 [2024-07-23 18:37:02.734114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.734227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.706 [2024-07-23 18:37:02.734279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.706 [2024-07-23 18:37:02.734322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.706 [2024-07-23 18:37:02.734434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.734482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.706 [2024-07-23 18:37:02.734520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:02.706 [2024-07-23 18:37:02.734559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.706 [2024-07-23 18:37:02.734642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.706 [2024-07-23 18:37:02.735009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.706 [2024-07-23 18:37:02.735097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.735138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.706 [2024-07-23 18:37:02.735192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:19:02.706 [2024-07-23 18:37:02.735226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.706 [2024-07-23 18:37:02.735355] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e8384e09-63b5-481c-89d9-86ebd1f98094 00:19:02.706 [2024-07-23 18:37:02.736875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.736955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:02.706 [2024-07-23 18:37:02.736997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:02.706 [2024-07-23 18:37:02.737037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.706 [2024-07-23 18:37:02.744669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.744756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.706 [2024-07-23 18:37:02.744790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.578 ms 00:19:02.706 [2024-07-23 18:37:02.744818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.706 [2024-07-23 18:37:02.744918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.706 [2024-07-23 18:37:02.744960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.707 [2024-07-23 18:37:02.744997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.060 ms 00:19:02.707 [2024-07-23 18:37:02.745026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.745143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.707 [2024-07-23 18:37:02.745208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.707 [2024-07-23 18:37:02.745239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:02.707 [2024-07-23 18:37:02.745274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.745334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.707 [2024-07-23 18:37:02.747116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.707 [2024-07-23 18:37:02.747188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.707 [2024-07-23 18:37:02.747228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:19:02.707 [2024-07-23 18:37:02.747264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.747339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.707 [2024-07-23 18:37:02.747392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.707 [2024-07-23 18:37:02.747428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:02.707 [2024-07-23 18:37:02.747466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.747510] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:02.707 [2024-07-23 18:37:02.747698] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.707 [2024-07-23 18:37:02.747761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.707 [2024-07-23 18:37:02.747838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:02.707 [2024-07-23 18:37:02.747895] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.707 [2024-07-23 18:37:02.747959] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.707 [2024-07-23 18:37:02.748013] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:02.707 [2024-07-23 18:37:02.748047] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.707 [2024-07-23 18:37:02.748102] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.707 [2024-07-23 18:37:02.748137] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.707 [2024-07-23 18:37:02.748182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.707 [2024-07-23 18:37:02.748221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.707 [2024-07-23 18:37:02.748262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:19:02.707 [2024-07-23 18:37:02.748300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.748410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:02.707 [2024-07-23 18:37:02.748460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.707 [2024-07-23 18:37:02.748498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:02.707 [2024-07-23 18:37:02.748536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.707 [2024-07-23 18:37:02.748676] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.707 [2024-07-23 18:37:02.748720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.707 [2024-07-23 18:37:02.748760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.707 [2024-07-23 18:37:02.748798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.748840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.707 [2024-07-23 18:37:02.748875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.748915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:02.707 [2024-07-23 18:37:02.748953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.707 [2024-07-23 18:37:02.748991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.707 [2024-07-23 18:37:02.749065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.707 [2024-07-23 18:37:02.749103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:02.707 [2024-07-23 18:37:02.749142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.707 [2024-07-23 18:37:02.749178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.707 [2024-07-23 18:37:02.749219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:02.707 [2024-07-23 18:37:02.749256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.707 [2024-07-23 18:37:02.749342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:02.707 [2024-07-23 18:37:02.749377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.707 [2024-07-23 18:37:02.749450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.707 [2024-07-23 18:37:02.749524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.707 [2024-07-23 18:37:02.749560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.707 [2024-07-23 18:37:02.749639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.707 [2024-07-23 18:37:02.749681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.707 [2024-07-23 18:37:02.749752] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:19:02.707 [2024-07-23 18:37:02.749788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.707 [2024-07-23 18:37:02.749860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.707 [2024-07-23 18:37:02.749895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:02.707 [2024-07-23 18:37:02.749930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.707 [2024-07-23 18:37:02.749966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.707 [2024-07-23 18:37:02.749979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:02.707 [2024-07-23 18:37:02.749990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.707 [2024-07-23 18:37:02.749999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.707 [2024-07-23 18:37:02.750009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:02.707 [2024-07-23 18:37:02.750018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.750028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.707 [2024-07-23 18:37:02.750037] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:02.707 [2024-07-23 18:37:02.750047] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.750055] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.707 [2024-07-23 18:37:02.750079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.707 [2024-07-23 18:37:02.750089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.707 [2024-07-23 18:37:02.750102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.707 [2024-07-23 18:37:02.750114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.707 [2024-07-23 18:37:02.750128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.707 [2024-07-23 18:37:02.750137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.707 [2024-07-23 18:37:02.750147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.707 [2024-07-23 18:37:02.750156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.707 [2024-07-23 18:37:02.750167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.707 [2024-07-23 18:37:02.750181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.707 [2024-07-23 18:37:02.750195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.707 [2024-07-23 18:37:02.750216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:02.707 [2024-07-23 18:37:02.750229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:02.707 [2024-07-23 18:37:02.750239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:19:02.707 [2024-07-23 18:37:02.750250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:02.707 [2024-07-23 18:37:02.750259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:02.707 [2024-07-23 18:37:02.750270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:02.707 [2024-07-23 18:37:02.750279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:02.707 [2024-07-23 18:37:02.750291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:02.708 [2024-07-23 18:37:02.750300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:02.708 [2024-07-23 18:37:02.750312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:02.708 [2024-07-23 18:37:02.750360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.708 [2024-07-23 18:37:02.750374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.708 [2024-07-23 18:37:02.750395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.708 [2024-07-23 18:37:02.750405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.708 [2024-07-23 18:37:02.750416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.708 [2024-07-23 18:37:02.750426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.708 [2024-07-23 18:37:02.750438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.708 [2024-07-23 18:37:02.750448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:19:02.708 [2024-07-23 18:37:02.750462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.708 [2024-07-23 18:37:02.750529] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:19:02.708 [2024-07-23 18:37:02.750544] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:06.906 [2024-07-23 18:37:06.242066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.242142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:06.907 [2024-07-23 18:37:06.242173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3498.265 ms 00:19:06.907 [2024-07-23 18:37:06.242184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.261978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.262039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.907 [2024-07-23 18:37:06.262070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.708 ms 00:19:06.907 [2024-07-23 18:37:06.262104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.262236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.262255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:06.907 [2024-07-23 18:37:06.262265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:06.907 [2024-07-23 18:37:06.262275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.278443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.278493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.907 [2024-07-23 18:37:06.278504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.145 ms 00:19:06.907 [2024-07-23 18:37:06.278531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.278572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.278602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.907 [2024-07-23 18:37:06.278611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.907 [2024-07-23 18:37:06.278621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.279425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.279458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.907 [2024-07-23 18:37:06.279467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:19:06.907 [2024-07-23 18:37:06.279477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.279589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.279609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.907 [2024-07-23 18:37:06.279617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:06.907 [2024-07-23 18:37:06.279626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.291387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.291427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:19:06.907 [2024-07-23 18:37:06.291438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.763 ms 00:19:06.907 [2024-07-23 18:37:06.291463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.300165] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:06.907 [2024-07-23 18:37:06.305295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.305320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.907 [2024-07-23 18:37:06.305333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.767 ms 00:19:06.907 [2024-07-23 18:37:06.305356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.389254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.389325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:06.907 [2024-07-23 18:37:06.389342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.018 ms 00:19:06.907 [2024-07-23 18:37:06.389369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.389589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.389601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:06.907 [2024-07-23 18:37:06.389613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:06.907 [2024-07-23 18:37:06.389620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.393365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.393402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:06.907 [2024-07-23 18:37:06.393416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:19:06.907 [2024-07-23 18:37:06.393427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.396135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.396169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:06.907 [2024-07-23 18:37:06.396182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:19:06.907 [2024-07-23 18:37:06.396190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.396478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.396490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:06.907 [2024-07-23 18:37:06.396513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:06.907 [2024-07-23 18:37:06.396522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.440824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.440955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:06.907 [2024-07-23 18:37:06.440993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.353 ms 00:19:06.907 [2024-07-23 18:37:06.441019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.446435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.446517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:06.907 [2024-07-23 18:37:06.446550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:19:06.907 [2024-07-23 18:37:06.446578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.449803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.449871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:06.907 [2024-07-23 18:37:06.449901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:19:06.907 [2024-07-23 18:37:06.449919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.453517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.453601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:06.907 [2024-07-23 18:37:06.453634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:19:06.907 [2024-07-23 18:37:06.453655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.453722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.453786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:06.907 [2024-07-23 18:37:06.453800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:06.907 [2024-07-23 18:37:06.453807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.453881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.453890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:06.907 [2024-07-23 18:37:06.453901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:06.907 [2024-07-23 18:37:06.453908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.455290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3727.871 ms, result 0 00:19:06.907 { 00:19:06.907 "name": "ftl0", 00:19:06.907 "uuid": "e8384e09-63b5-481c-89d9-86ebd1f98094" 00:19:06.907 } 00:19:06.907 18:37:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:06.907 18:37:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:06.907 18:37:06 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:06.907 18:37:06 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:06.907 [2024-07-23 18:37:06.832452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.832531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:06.907 [2024-07-23 18:37:06.832578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:06.907 [2024-07-23 18:37:06.832614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.832652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.907 [2024-07-23 18:37:06.833930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.833976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:06.907 [2024-07-23 18:37:06.834015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:19:06.907 [2024-07-23 18:37:06.834035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.834270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.834305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:06.907 [2024-07-23 18:37:06.834337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:06.907 [2024-07-23 18:37:06.834363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.907 [2024-07-23 18:37:06.836803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.907 [2024-07-23 18:37:06.836846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:06.907 [2024-07-23 18:37:06.836874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:19:06.907 [2024-07-23 18:37:06.836894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.841772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.841829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:06.908 [2024-07-23 18:37:06.841858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:19:06.908 [2024-07-23 18:37:06.841878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.843643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.843720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:06.908 [2024-07-23 18:37:06.843764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:19:06.908 [2024-07-23 18:37:06.843783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.849121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.849192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:06.908 [2024-07-23 18:37:06.849210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:19:06.908 [2024-07-23 18:37:06.849218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.849336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.849347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:06.908 [2024-07-23 18:37:06.849360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:06.908 [2024-07-23 18:37:06.849371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.851222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.851262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:06.908 [2024-07-23 18:37:06.851274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 
00:19:06.908 [2024-07-23 18:37:06.851282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.852791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.852820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:06.908 [2024-07-23 18:37:06.852835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:19:06.908 [2024-07-23 18:37:06.852842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.854084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.854113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:06.908 [2024-07-23 18:37:06.854124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:19:06.908 [2024-07-23 18:37:06.854130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.855333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.908 [2024-07-23 18:37:06.855391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:06.908 [2024-07-23 18:37:06.855428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:19:06.908 [2024-07-23 18:37:06.855448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.908 [2024-07-23 18:37:06.855523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:06.908 [2024-07-23 18:37:06.855560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.855998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:19:06.908 [2024-07-23 18:37:06.856100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
39: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:06.908 [2024-07-23 18:37:06.856470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856539] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856812] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:06.909 [2024-07-23 18:37:06.856922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:06.909 [2024-07-23 18:37:06.856934] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8384e09-63b5-481c-89d9-86ebd1f98094 00:19:06.909 [2024-07-23 18:37:06.856942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:06.909 [2024-07-23 18:37:06.856951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:06.909 [2024-07-23 18:37:06.856958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:06.909 [2024-07-23 18:37:06.856969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:06.909 [2024-07-23 18:37:06.856976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:06.909 [2024-07-23 18:37:06.856987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:06.909 [2024-07-23 18:37:06.856999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:06.909 [2024-07-23 18:37:06.857007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:06.909 [2024-07-23 18:37:06.857014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:06.909 [2024-07-23 18:37:06.857024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.909 [2024-07-23 18:37:06.857042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:06.909 [2024-07-23 18:37:06.857054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:19:06.909 [2024-07-23 18:37:06.857061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.859899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.909 [2024-07-23 18:37:06.859918] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:06.909 [2024-07-23 18:37:06.859940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:19:06.909 [2024-07-23 18:37:06.859948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.860137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.909 [2024-07-23 18:37:06.860145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:06.909 [2024-07-23 18:37:06.860155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:19:06.909 [2024-07-23 18:37:06.860162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.870770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.870800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.909 [2024-07-23 18:37:06.870813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.870839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.870896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.870904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.909 [2024-07-23 18:37:06.870914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.870921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.871005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.871017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.909 [2024-07-23 18:37:06.871030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.871037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.871062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.871069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.909 [2024-07-23 18:37:06.871079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.871086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.896355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.896410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.909 [2024-07-23 18:37:06.896424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.896449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.909451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.909 [2024-07-23 18:37:06.909490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.909 [2024-07-23 18:37:06.909504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.909 [2024-07-23 18:37:06.909529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.909 [2024-07-23 18:37:06.909625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:06.909 [2024-07-23 18:37:06.909636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.909 [2024-07-23 18:37:06.909650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.909658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.909729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.910 [2024-07-23 18:37:06.909741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.910 [2024-07-23 18:37:06.909752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.909759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.909850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.910 [2024-07-23 18:37:06.909862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.910 [2024-07-23 18:37:06.909872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.909880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.909922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.910 [2024-07-23 18:37:06.909932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:06.910 [2024-07-23 18:37:06.909945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.909954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.910006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.910 [2024-07-23 18:37:06.910020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.910 [2024-07-23 18:37:06.910033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.910041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.910093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.910 [2024-07-23 18:37:06.910106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.910 [2024-07-23 18:37:06.910115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.910 [2024-07-23 18:37:06.910123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.910 [2024-07-23 18:37:06.910275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 77.917 ms, result 0 00:19:06.910 true 00:19:06.910 18:37:06 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90248 00:19:06.910 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90248 ']' 00:19:06.910 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90248 00:19:06.910 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:19:06.910 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:06.910 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90248 00:19:07.170 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:07.170 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:19:07.170 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90248' 00:19:07.170 killing process with pid 90248 00:19:07.170 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 90248 00:19:07.170 18:37:06 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 90248 00:19:12.451 18:37:11 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:14.999 262144+0 records in 00:19:14.999 262144+0 records out 00:19:14.999 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.31305 s, 324 MB/s 00:19:14.999 18:37:14 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:16.907 18:37:16 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:16.907 [2024-07-23 18:37:16.535128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:19:16.907 [2024-07-23 18:37:16.535246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90489 ] 00:19:16.907 [2024-07-23 18:37:16.682124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.907 [2024-07-23 18:37:16.750760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.907 [2024-07-23 18:37:16.901500] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:16.907 [2024-07-23 18:37:16.901614] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.168 [2024-07-23 18:37:17.052128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.052196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.168 [2024-07-23 18:37:17.052226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:17.168 [2024-07-23 18:37:17.052248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.052310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.052334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.168 [2024-07-23 18:37:17.052347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:17.168 [2024-07-23 18:37:17.052362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.052391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.168 [2024-07-23 18:37:17.052634] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.168 [2024-07-23 18:37:17.052663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.052679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.168 [2024-07-23 18:37:17.052692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:19:17.168 [2024-07-23 18:37:17.052703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.055124] mngt/ftl_mngt_md.c: 
453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.168 [2024-07-23 18:37:17.058697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.058732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.168 [2024-07-23 18:37:17.058753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:19:17.168 [2024-07-23 18:37:17.058764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.058838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.058859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.168 [2024-07-23 18:37:17.058871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:17.168 [2024-07-23 18:37:17.058886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.071080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.071113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.168 [2024-07-23 18:37:17.071127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.152 ms 00:19:17.168 [2024-07-23 18:37:17.071153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.168 [2024-07-23 18:37:17.071262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.168 [2024-07-23 18:37:17.071283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.169 [2024-07-23 18:37:17.071297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:17.169 [2024-07-23 18:37:17.071309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.071383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-07-23 18:37:17.071400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.169 [2024-07-23 18:37:17.071425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:17.169 [2024-07-23 18:37:17.071437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.071475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.169 [2024-07-23 18:37:17.074140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-07-23 18:37:17.074166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.169 [2024-07-23 18:37:17.074178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:19:17.169 [2024-07-23 18:37:17.074189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.074236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-07-23 18:37:17.074254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.169 [2024-07-23 18:37:17.074272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:17.169 [2024-07-23 18:37:17.074285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.074314] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.169 [2024-07-23 18:37:17.074362] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:17.169 [2024-07-23 18:37:17.074412] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.169 [2024-07-23 18:37:17.074440] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:17.169 [2024-07-23 18:37:17.074539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.169 [2024-07-23 18:37:17.074566] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.169 [2024-07-23 18:37:17.074593] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:17.169 [2024-07-23 18:37:17.074608] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.169 [2024-07-23 18:37:17.074621] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.169 [2024-07-23 18:37:17.074633] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:17.169 [2024-07-23 18:37:17.074645] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.169 [2024-07-23 18:37:17.074656] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.169 [2024-07-23 18:37:17.074667] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.169 [2024-07-23 18:37:17.074677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-07-23 18:37:17.074687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.169 [2024-07-23 18:37:17.074699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:17.169 [2024-07-23 18:37:17.074712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.074797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-07-23 18:37:17.074817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.169 [2024-07-23 18:37:17.074832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:17.169 [2024-07-23 18:37:17.074841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-07-23 18:37:17.074968] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.169 [2024-07-23 18:37:17.074991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.169 [2024-07-23 18:37:17.075003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.169 [2024-07-23 18:37:17.075039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.169 [2024-07-23 18:37:17.075074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:19:17.169 [2024-07-23 18:37:17.075084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.169 [2024-07-23 18:37:17.075095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.169 [2024-07-23 18:37:17.075105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:17.169 [2024-07-23 18:37:17.075114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.169 [2024-07-23 18:37:17.075123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.169 [2024-07-23 18:37:17.075132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:17.169 [2024-07-23 18:37:17.075141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.169 [2024-07-23 18:37:17.075165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.169 [2024-07-23 18:37:17.075195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.169 [2024-07-23 18:37:17.075224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.169 [2024-07-23 18:37:17.075258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.169 [2024-07-23 18:37:17.075286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.169 [2024-07-23 18:37:17.075324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.169 [2024-07-23 18:37:17.075349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.169 [2024-07-23 18:37:17.075359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:17.169 [2024-07-23 18:37:17.075367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.169 [2024-07-23 18:37:17.075375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.169 [2024-07-23 18:37:17.075385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:17.169 [2024-07-23 18:37:17.075395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075404] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.169 [2024-07-23 18:37:17.075416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:17.169 [2024-07-23 18:37:17.075429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075453] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.169 [2024-07-23 18:37:17.075464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.169 [2024-07-23 18:37:17.075477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.169 [2024-07-23 18:37:17.075499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.169 [2024-07-23 18:37:17.075517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.169 [2024-07-23 18:37:17.075529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.169 [2024-07-23 18:37:17.075537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.169 [2024-07-23 18:37:17.075549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.169 [2024-07-23 18:37:17.075562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.169 [2024-07-23 18:37:17.075595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.169 [2024-07-23 18:37:17.075629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.169 [2024-07-23 18:37:17.075660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:17.169 [2024-07-23 18:37:17.075673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:17.169 [2024-07-23 18:37:17.075684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:17.169 [2024-07-23 18:37:17.075695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:17.169 [2024-07-23 18:37:17.075706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:17.170 [2024-07-23 18:37:17.075718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:17.170 [2024-07-23 18:37:17.075729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:17.170 [2024-07-23 18:37:17.075741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:17.170 [2024-07-23 18:37:17.075752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:17.170 [2024-07-23 18:37:17.075766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075777] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:17.170 [2024-07-23 18:37:17.075822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.170 [2024-07-23 18:37:17.075843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.170 [2024-07-23 18:37:17.075876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.170 [2024-07-23 18:37:17.075899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.170 [2024-07-23 18:37:17.075913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.170 [2024-07-23 18:37:17.075925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.075938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.170 [2024-07-23 18:37:17.075948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:19:17.170 [2024-07-23 18:37:17.075972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.106973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.107043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.170 [2024-07-23 18:37:17.107077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.968 ms 00:19:17.170 [2024-07-23 18:37:17.107091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.107203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.107237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.170 [2024-07-23 18:37:17.107260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:17.170 [2024-07-23 18:37:17.107281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.123224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.123296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.170 [2024-07-23 18:37:17.123313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.877 ms 00:19:17.170 [2024-07-23 18:37:17.123335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.123383] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.123400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.170 [2024-07-23 18:37:17.123414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.170 [2024-07-23 18:37:17.123434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.124233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.124255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.170 [2024-07-23 18:37:17.124283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:19:17.170 [2024-07-23 18:37:17.124303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.124445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.124465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.170 [2024-07-23 18:37:17.124486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:17.170 [2024-07-23 18:37:17.124498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.134262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.134296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.170 [2024-07-23 18:37:17.134310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.740 ms 00:19:17.170 [2024-07-23 18:37:17.134350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.137962] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:17.170 [2024-07-23 18:37:17.137997] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:17.170 [2024-07-23 18:37:17.138026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.138038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:17.170 [2024-07-23 18:37:17.138062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:19:17.170 [2024-07-23 18:37:17.138073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.150682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.150724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:17.170 [2024-07-23 18:37:17.150739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.582 ms 00:19:17.170 [2024-07-23 18:37:17.150766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.152544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.152591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:17.170 [2024-07-23 18:37:17.152606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:19:17.170 [2024-07-23 18:37:17.152616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.154046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 
18:37:17.154080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:17.170 [2024-07-23 18:37:17.154092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:19:17.170 [2024-07-23 18:37:17.154102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.154380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.154405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:17.170 [2024-07-23 18:37:17.154419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:17.170 [2024-07-23 18:37:17.154429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.183460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.183550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:17.170 [2024-07-23 18:37:17.183580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.060 ms 00:19:17.170 [2024-07-23 18:37:17.183591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.189886] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:17.170 [2024-07-23 18:37:17.193885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.193922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:17.170 [2024-07-23 18:37:17.193937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.253 ms 00:19:17.170 [2024-07-23 18:37:17.193963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.194063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.194084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:17.170 [2024-07-23 18:37:17.194096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.170 [2024-07-23 18:37:17.194107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.194229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.194249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:17.170 [2024-07-23 18:37:17.194267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:17.170 [2024-07-23 18:37:17.194283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.194318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.194335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:17.170 [2024-07-23 18:37:17.194348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:17.170 [2024-07-23 18:37:17.194358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.194404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:17.170 [2024-07-23 18:37:17.194428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.194440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:17.170 [2024-07-23 
18:37:17.194464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:17.170 [2024-07-23 18:37:17.194493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-07-23 18:37:17.199106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-07-23 18:37:17.199143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:17.170 [2024-07-23 18:37:17.199157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:19:17.170 [2024-07-23 18:37:17.199168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-07-23 18:37:17.199244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.171 [2024-07-23 18:37:17.199270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:17.171 [2024-07-23 18:37:17.199283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:17.171 [2024-07-23 18:37:17.199297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-07-23 18:37:17.200758] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.411 ms, result 0 00:19:51.105  Copying: 29/1024 [MB] (29 MBps) Copying: 58/1024 [MB] (29 MBps) Copying: 88/1024 [MB] (29 MBps) Copying: 117/1024 [MB] (29 MBps) Copying: 146/1024 [MB] (28 MBps) Copying: 175/1024 [MB] (28 MBps) Copying: 204/1024 [MB] (29 MBps) Copying: 233/1024 [MB] (29 MBps) Copying: 263/1024 [MB] (29 MBps) Copying: 294/1024 [MB] (30 MBps) Copying: 323/1024 [MB] (29 MBps) Copying: 353/1024 [MB] (29 MBps) Copying: 382/1024 [MB] (29 MBps) Copying: 412/1024 [MB] (29 MBps) Copying: 442/1024 [MB] (30 MBps) Copying: 472/1024 [MB] (30 MBps) Copying: 503/1024 [MB] (30 MBps) Copying: 533/1024 [MB] (30 MBps) Copying: 566/1024 [MB] (32 MBps) Copying: 597/1024 [MB] (31 MBps) Copying: 629/1024 [MB] (31 MBps) Copying: 662/1024 [MB] (32 MBps) Copying: 694/1024 [MB] (32 MBps) Copying: 726/1024 [MB] (31 MBps) Copying: 757/1024 [MB] (31 MBps) Copying: 787/1024 [MB] (30 MBps) Copying: 818/1024 [MB] (30 MBps) Copying: 848/1024 [MB] (29 MBps) Copying: 878/1024 [MB] (30 MBps) Copying: 909/1024 [MB] (30 MBps) Copying: 939/1024 [MB] (30 MBps) Copying: 970/1024 [MB] (30 MBps) Copying: 1000/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-23 18:37:50.919553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.919647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.105 [2024-07-23 18:37:50.919667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.105 [2024-07-23 18:37:50.919683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.919709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:51.105 [2024-07-23 18:37:50.921036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.921064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.105 [2024-07-23 18:37:50.921073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:19:51.105 [2024-07-23 18:37:50.921088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.923376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.923417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.105 [2024-07-23 18:37:50.923428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:19:51.105 [2024-07-23 18:37:50.923449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.940316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.940359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.105 [2024-07-23 18:37:50.940371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:19:51.105 [2024-07-23 18:37:50.940379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.945306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.945335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.105 [2024-07-23 18:37:50.945344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.903 ms 00:19:51.105 [2024-07-23 18:37:50.945373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.946773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.946807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.105 [2024-07-23 18:37:50.946816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:19:51.105 [2024-07-23 18:37:50.946823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.951189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.951223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.105 [2024-07-23 18:37:50.951233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:19:51.105 [2024-07-23 18:37:50.951240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.105 [2024-07-23 18:37:50.951366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.105 [2024-07-23 18:37:50.951382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.106 [2024-07-23 18:37:50.951391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:51.106 [2024-07-23 18:37:50.951399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.106 [2024-07-23 18:37:50.953480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.106 [2024-07-23 18:37:50.953512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:51.106 [2024-07-23 18:37:50.953521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:19:51.106 [2024-07-23 18:37:50.953528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.106 [2024-07-23 18:37:50.955129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.106 [2024-07-23 18:37:50.955160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:51.106 [2024-07-23 18:37:50.955169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:19:51.106 [2024-07-23 18:37:50.955176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.106 [2024-07-23 
18:37:50.956410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.106 [2024-07-23 18:37:50.956441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.106 [2024-07-23 18:37:50.956450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:19:51.106 [2024-07-23 18:37:50.956457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.106 [2024-07-23 18:37:50.957518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.106 [2024-07-23 18:37:50.957565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.106 [2024-07-23 18:37:50.957593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:19:51.106 [2024-07-23 18:37:50.957601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.106 [2024-07-23 18:37:50.957626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.106 [2024-07-23 18:37:50.957640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: 
free 00:19:51.106 [2024-07-23 18:37:50.957780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 
261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.957997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.106 [2024-07-23 18:37:50.958216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958376] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.107 [2024-07-23 18:37:50.958442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.107 [2024-07-23 18:37:50.958450] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8384e09-63b5-481c-89d9-86ebd1f98094 00:19:51.107 [2024-07-23 18:37:50.958458] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.107 [2024-07-23 18:37:50.958466] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.107 [2024-07-23 18:37:50.958481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.107 [2024-07-23 18:37:50.958489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.107 [2024-07-23 18:37:50.958495] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.107 [2024-07-23 18:37:50.958503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.107 [2024-07-23 18:37:50.958511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.107 [2024-07-23 18:37:50.958518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.107 [2024-07-23 18:37:50.958524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.107 [2024-07-23 18:37:50.958532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.107 [2024-07-23 18:37:50.958550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.107 [2024-07-23 18:37:50.958558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:19:51.107 [2024-07-23 18:37:50.958565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.961392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.107 [2024-07-23 18:37:50.961410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.107 [2024-07-23 18:37:50.961428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:19:51.107 [2024-07-23 18:37:50.961436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.961625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.107 [2024-07-23 18:37:50.961638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.107 [2024-07-23 18:37:50.961647] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:19:51.107 [2024-07-23 18:37:50.961655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.970875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:50.970903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.107 [2024-07-23 18:37:50.970913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:50.970922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.970978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:50.970987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.107 [2024-07-23 18:37:50.970994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:50.971001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.971057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:50.971070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.107 [2024-07-23 18:37:50.971078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:50.971086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.971100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:50.971112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.107 [2024-07-23 18:37:50.971120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:50.971127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:50.993957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:50.994003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.107 [2024-07-23 18:37:50.994015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:50.994023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.007806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.007851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.107 [2024-07-23 18:37:51.007861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.007869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.007925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.007934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.107 [2024-07-23 18:37:51.007943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.007950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.007985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.007994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:19:51.107 [2024-07-23 18:37:51.008007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.008015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.008109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.008124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.107 [2024-07-23 18:37:51.008133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.008140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.008182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.008191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.107 [2024-07-23 18:37:51.008199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.008211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.008252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.008260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.107 [2024-07-23 18:37:51.008268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.008275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.008321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.107 [2024-07-23 18:37:51.008330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.107 [2024-07-23 18:37:51.008342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.107 [2024-07-23 18:37:51.008349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.107 [2024-07-23 18:37:51.008490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.067 ms, result 0 00:19:51.677 00:19:51.677 00:19:51.677 18:37:51 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:51.677 [2024-07-23 18:37:51.713288] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
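A note on the transfer just completed and the one launched here: restore.sh drives spdk_dd against the ftl0 bdev (--count=262144 blocks), and the transfer progress surfaces as the long run of "Copying: N/1024 [MB] (X MBps)" entries earlier in this log. The Python sketch below is a hypothetical helper, not part of the SPDK tree; it assumes this console output has been saved to a local file (the default name console.log is a placeholder) and summarizes the per-interval and reported-average throughput from those progress entries.

import re
import sys

# Matches per-interval entries such as "Copying: 29/1024 [MB] (29 MBps)"
# as well as the final "Copying: 1024/1024 [MB] (average 30 MBps)" summary.
PROGRESS = re.compile(r"Copying:\s+(\d+)/(\d+)\s+\[MB\]\s+\((average\s+)?(\d+)\s+MBps\)")

def summarize(path):
    rates, average, copied, total = [], None, 0, 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for m in PROGRESS.finditer(fh.read()):
            copied, total = int(m.group(1)), int(m.group(2))
            if m.group(3):                  # the "(average N MBps)" summary entry
                average = int(m.group(4))
            else:                           # a regular per-interval sample
                rates.append(int(m.group(4)))
    print(f"copied {copied}/{total} MB")
    if rates:
        print(f"per-interval rate: {min(rates)}-{max(rates)} MBps over {len(rates)} samples")
    if average is not None:
        print(f"reported average: {average} MBps")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

For the run logged above, this would report a 28-32 MBps spread of per-interval samples alongside the 30 MBps average that spdk_dd itself prints.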
00:19:51.677 [2024-07-23 18:37:51.713476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90846 ] 00:19:51.936 [2024-07-23 18:37:51.875272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.936 [2024-07-23 18:37:51.944033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.196 [2024-07-23 18:37:52.094508] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.196 [2024-07-23 18:37:52.094592] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.196 [2024-07-23 18:37:52.245111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.196 [2024-07-23 18:37:52.245170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.196 [2024-07-23 18:37:52.245192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.196 [2024-07-23 18:37:52.245200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.196 [2024-07-23 18:37:52.245250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.196 [2024-07-23 18:37:52.245259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.196 [2024-07-23 18:37:52.245267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:52.196 [2024-07-23 18:37:52.245280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.196 [2024-07-23 18:37:52.245298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.196 [2024-07-23 18:37:52.245501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.196 [2024-07-23 18:37:52.245525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.196 [2024-07-23 18:37:52.245536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.196 [2024-07-23 18:37:52.245544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:52.196 [2024-07-23 18:37:52.245551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.196 [2024-07-23 18:37:52.248235] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.457 [2024-07-23 18:37:52.251859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.251894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.457 [2024-07-23 18:37:52.251911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:19:52.457 [2024-07-23 18:37:52.251920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.251981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.251991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.457 [2024-07-23 18:37:52.252010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:52.457 [2024-07-23 18:37:52.252017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.264213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 
18:37:52.264245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.457 [2024-07-23 18:37:52.264255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:19:52.457 [2024-07-23 18:37:52.264263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.264437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.264456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.457 [2024-07-23 18:37:52.264465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:52.457 [2024-07-23 18:37:52.264472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.264536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.264548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.457 [2024-07-23 18:37:52.264564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:52.457 [2024-07-23 18:37:52.264586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.264621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.457 [2024-07-23 18:37:52.267230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.267253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.457 [2024-07-23 18:37:52.267272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:19:52.457 [2024-07-23 18:37:52.267280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.267321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.267337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.457 [2024-07-23 18:37:52.267348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:52.457 [2024-07-23 18:37:52.267355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.267377] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.457 [2024-07-23 18:37:52.267400] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.457 [2024-07-23 18:37:52.267440] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.457 [2024-07-23 18:37:52.267462] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:52.457 [2024-07-23 18:37:52.267566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.457 [2024-07-23 18:37:52.267597] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.457 [2024-07-23 18:37:52.267609] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:52.457 [2024-07-23 18:37:52.267620] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.457 [2024-07-23 18:37:52.267637] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.457 [2024-07-23 18:37:52.267646] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:52.457 [2024-07-23 18:37:52.267654] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.457 [2024-07-23 18:37:52.267662] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.457 [2024-07-23 18:37:52.267670] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.457 [2024-07-23 18:37:52.267679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.267687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.457 [2024-07-23 18:37:52.267695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:52.457 [2024-07-23 18:37:52.267705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.267773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.457 [2024-07-23 18:37:52.267787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.457 [2024-07-23 18:37:52.267795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:52.457 [2024-07-23 18:37:52.267804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.457 [2024-07-23 18:37:52.267901] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.457 [2024-07-23 18:37:52.267916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.457 [2024-07-23 18:37:52.267925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.457 [2024-07-23 18:37:52.267933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.457 [2024-07-23 18:37:52.267946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.457 [2024-07-23 18:37:52.267953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.457 [2024-07-23 18:37:52.267961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:52.457 [2024-07-23 18:37:52.267968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.457 [2024-07-23 18:37:52.267975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:52.457 [2024-07-23 18:37:52.267984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.457 [2024-07-23 18:37:52.267991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.457 [2024-07-23 18:37:52.267998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:52.457 [2024-07-23 18:37:52.268004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.457 [2024-07-23 18:37:52.268011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.457 [2024-07-23 18:37:52.268018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:52.457 [2024-07-23 18:37:52.268024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.457 [2024-07-23 18:37:52.268043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:52.457 [2024-07-23 18:37:52.268050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.457 [2024-07-23 18:37:52.268063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.457 [2024-07-23 18:37:52.268076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.457 [2024-07-23 18:37:52.268083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.457 [2024-07-23 18:37:52.268096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.457 [2024-07-23 18:37:52.268103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.457 [2024-07-23 18:37:52.268117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.457 [2024-07-23 18:37:52.268123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:52.457 [2024-07-23 18:37:52.268129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.457 [2024-07-23 18:37:52.268136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.457 [2024-07-23 18:37:52.268148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:52.458 [2024-07-23 18:37:52.268156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.458 [2024-07-23 18:37:52.268162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.458 [2024-07-23 18:37:52.268169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:52.458 [2024-07-23 18:37:52.268175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.458 [2024-07-23 18:37:52.268181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.458 [2024-07-23 18:37:52.268188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:52.458 [2024-07-23 18:37:52.268195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.458 [2024-07-23 18:37:52.268201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.458 [2024-07-23 18:37:52.268210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:52.458 [2024-07-23 18:37:52.268217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.458 [2024-07-23 18:37:52.268223] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.458 [2024-07-23 18:37:52.268231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.458 [2024-07-23 18:37:52.268237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.458 [2024-07-23 18:37:52.268244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.458 [2024-07-23 18:37:52.268252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.458 [2024-07-23 18:37:52.268261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.458 [2024-07-23 18:37:52.268269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.458 [2024-07-23 18:37:52.268276] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.458 [2024-07-23 18:37:52.268283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.458 [2024-07-23 18:37:52.268290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.458 [2024-07-23 18:37:52.268298] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.458 [2024-07-23 18:37:52.268307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:52.458 [2024-07-23 18:37:52.268323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:52.458 [2024-07-23 18:37:52.268331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:52.458 [2024-07-23 18:37:52.268338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:52.458 [2024-07-23 18:37:52.268345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:52.458 [2024-07-23 18:37:52.268353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:52.458 [2024-07-23 18:37:52.268362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:52.458 [2024-07-23 18:37:52.268369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:52.458 [2024-07-23 18:37:52.268376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:52.458 [2024-07-23 18:37:52.268389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:52.458 [2024-07-23 18:37:52.268425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.458 [2024-07-23 18:37:52.268434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.458 [2024-07-23 18:37:52.268442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
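The region extents in the "SB metadata layout" entries are given in FTL blocks (hex blk_offs/blk_sz), while the NV cache and base device layout dumps report the same regions in MiB. The short sketch below is illustrative only and assumes a 4 KiB FTL block size; that assumption reproduces the MiB figures printed in this log (e.g. 0x5000 blocks -> 80.00 MiB for l2p, 0x1900000 blocks -> 102400.00 MiB for data_btm). The region tuples are copied from the dump entries nearby.

# Consistency check for the layout dump above (a sketch, not SPDK code).
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed; consistent with the dumps in this log)

def blocks_to_mib(blocks_hex: str) -> float:
    return int(blocks_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

# A few (name, blk_offs, blk_sz) tuples taken from the layout dump entries in this log.
regions = [
    ("sb (nvc)", "0x0",    "0x20"),        # -> offset 0.00 MiB, size 0.12 MiB
    ("l2p",      "0x20",   "0x5000"),      # -> offset 0.12 MiB, size 80.00 MiB
    ("band_md",  "0x5020", "0x80"),        # -> offset 80.12 MiB, size 0.50 MiB
    ("data_btm", "0x40",   "0x1900000"),   # -> offset 0.25 MiB, size 102400.00 MiB
]

for name, offs, size in regions:
    print(f"{name:9s} offset {blocks_to_mib(offs):10.2f} MiB  size {blocks_to_mib(size):10.2f} MiB")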
00:19:52.458 [2024-07-23 18:37:52.268450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.458 [2024-07-23 18:37:52.268473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.458 [2024-07-23 18:37:52.268485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.458 [2024-07-23 18:37:52.268494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.268503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.458 [2024-07-23 18:37:52.268511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:19:52.458 [2024-07-23 18:37:52.268524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.301050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.301091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.458 [2024-07-23 18:37:52.301106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.492 ms 00:19:52.458 [2024-07-23 18:37:52.301117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.301223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.301234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.458 [2024-07-23 18:37:52.301244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:52.458 [2024-07-23 18:37:52.301258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.317452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.317497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.458 [2024-07-23 18:37:52.317508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:19:52.458 [2024-07-23 18:37:52.317516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.317558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.317577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.458 [2024-07-23 18:37:52.317586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.458 [2024-07-23 18:37:52.317599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.318370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.318387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.458 [2024-07-23 18:37:52.318397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:19:52.458 [2024-07-23 18:37:52.318414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.318535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.318551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.458 [2024-07-23 18:37:52.318560] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:52.458 [2024-07-23 18:37:52.318577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.328331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.328364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.458 [2024-07-23 18:37:52.328375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.741 ms 00:19:52.458 [2024-07-23 18:37:52.328384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.331987] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:52.458 [2024-07-23 18:37:52.332018] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.458 [2024-07-23 18:37:52.332034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.332043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.458 [2024-07-23 18:37:52.332051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:19:52.458 [2024-07-23 18:37:52.332059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.344831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.344865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.458 [2024-07-23 18:37:52.344877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:19:52.458 [2024-07-23 18:37:52.344884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.346738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.346768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.458 [2024-07-23 18:37:52.346778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:19:52.458 [2024-07-23 18:37:52.346784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.348253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.348283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.458 [2024-07-23 18:37:52.348293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:19:52.458 [2024-07-23 18:37:52.348300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.348613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.348633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.458 [2024-07-23 18:37:52.348643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:19:52.458 [2024-07-23 18:37:52.348650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.378326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.458 [2024-07-23 18:37:52.378395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.458 [2024-07-23 18:37:52.378410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.691 ms 00:19:52.458 
[2024-07-23 18:37:52.378419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.458 [2024-07-23 18:37:52.384767] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:52.459 [2024-07-23 18:37:52.388910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.388939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.459 [2024-07-23 18:37:52.388950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.461 ms 00:19:52.459 [2024-07-23 18:37:52.388958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.389038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.389047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.459 [2024-07-23 18:37:52.389055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:52.459 [2024-07-23 18:37:52.389066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.389150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.389166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.459 [2024-07-23 18:37:52.389179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:52.459 [2024-07-23 18:37:52.389186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.389208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.389216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.459 [2024-07-23 18:37:52.389224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.459 [2024-07-23 18:37:52.389231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.389267] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.459 [2024-07-23 18:37:52.389277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.389284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.459 [2024-07-23 18:37:52.389306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:52.459 [2024-07-23 18:37:52.389314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.394032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.394063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.459 [2024-07-23 18:37:52.394074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.709 ms 00:19:52.459 [2024-07-23 18:37:52.394082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 18:37:52.394146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.459 [2024-07-23 18:37:52.394155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.459 [2024-07-23 18:37:52.394164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:52.459 [2024-07-23 18:37:52.394177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.459 [2024-07-23 
18:37:52.395657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.305 ms, result 0 00:20:24.568  Copying: 33/1024 [MB] (33 MBps) Copying: 67/1024 [MB] (33 MBps) Copying: 100/1024 [MB] (33 MBps) Copying: 132/1024 [MB] (32 MBps) Copying: 166/1024 [MB] (33 MBps) Copying: 198/1024 [MB] (32 MBps) Copying: 231/1024 [MB] (32 MBps) Copying: 263/1024 [MB] (32 MBps) Copying: 296/1024 [MB] (32 MBps) Copying: 330/1024 [MB] (33 MBps) Copying: 363/1024 [MB] (32 MBps) Copying: 396/1024 [MB] (32 MBps) Copying: 428/1024 [MB] (31 MBps) Copying: 460/1024 [MB] (32 MBps) Copying: 492/1024 [MB] (32 MBps) Copying: 526/1024 [MB] (34 MBps) Copying: 560/1024 [MB] (33 MBps) Copying: 593/1024 [MB] (33 MBps) Copying: 627/1024 [MB] (33 MBps) Copying: 660/1024 [MB] (32 MBps) Copying: 692/1024 [MB] (32 MBps) Copying: 722/1024 [MB] (30 MBps) Copying: 755/1024 [MB] (32 MBps) Copying: 787/1024 [MB] (32 MBps) Copying: 819/1024 [MB] (31 MBps) Copying: 847/1024 [MB] (28 MBps) Copying: 877/1024 [MB] (29 MBps) Copying: 910/1024 [MB] (32 MBps) Copying: 943/1024 [MB] (32 MBps) Copying: 975/1024 [MB] (32 MBps) Copying: 1005/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-23 18:38:24.558970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.559055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.568 [2024-07-23 18:38:24.559078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:24.568 [2024-07-23 18:38:24.559095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.559121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.568 [2024-07-23 18:38:24.560427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.560450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.568 [2024-07-23 18:38:24.560471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:20:24.568 [2024-07-23 18:38:24.560481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.560734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.560752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.568 [2024-07-23 18:38:24.560762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:24.568 [2024-07-23 18:38:24.560775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.563770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.563812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:24.568 [2024-07-23 18:38:24.563823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:20:24.568 [2024-07-23 18:38:24.563832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.569193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.569226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.568 [2024-07-23 18:38:24.569236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:20:24.568 [2024-07-23 18:38:24.569251] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.570797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.570833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.568 [2024-07-23 18:38:24.570845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:20:24.568 [2024-07-23 18:38:24.570853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.575417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.575466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.568 [2024-07-23 18:38:24.575476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:20:24.568 [2024-07-23 18:38:24.575484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.575612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.575625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.568 [2024-07-23 18:38:24.575634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:24.568 [2024-07-23 18:38:24.575646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.568 [2024-07-23 18:38:24.578379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.568 [2024-07-23 18:38:24.578424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:24.569 [2024-07-23 18:38:24.578437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:20:24.569 [2024-07-23 18:38:24.578448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.580142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.580180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:24.569 [2024-07-23 18:38:24.580191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:20:24.569 [2024-07-23 18:38:24.580199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.581324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.581357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.569 [2024-07-23 18:38:24.581367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:20:24.569 [2024-07-23 18:38:24.581376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.582383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.582415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.569 [2024-07-23 18:38:24.582425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:20:24.569 [2024-07-23 18:38:24.582433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.582449] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.569 [2024-07-23 18:38:24.582465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582477] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582736] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 
[2024-07-23 18:38:24.582961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.582987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:20:24.569 [2024-07-23 18:38:24.583209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.569 [2024-07-23 18:38:24.583495] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:20:24.569 [2024-07-23 18:38:24.583506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8384e09-63b5-481c-89d9-86ebd1f98094 00:20:24.569 [2024-07-23 18:38:24.583519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:24.569 [2024-07-23 18:38:24.583529] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:24.569 [2024-07-23 18:38:24.583539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.569 [2024-07-23 18:38:24.583550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.569 [2024-07-23 18:38:24.583560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.569 [2024-07-23 18:38:24.583581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.569 [2024-07-23 18:38:24.583608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.569 [2024-07-23 18:38:24.583617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.569 [2024-07-23 18:38:24.583627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.569 [2024-07-23 18:38:24.583638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.583649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.569 [2024-07-23 18:38:24.583660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:20:24.569 [2024-07-23 18:38:24.583670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.586680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.586701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.569 [2024-07-23 18:38:24.586721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:20:24.569 [2024-07-23 18:38:24.586729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.586908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.569 [2024-07-23 18:38:24.586925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.569 [2024-07-23 18:38:24.586933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:20:24.569 [2024-07-23 18:38:24.586941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.596605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.569 [2024-07-23 18:38:24.596636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.569 [2024-07-23 18:38:24.596646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.569 [2024-07-23 18:38:24.596662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.596715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.569 [2024-07-23 18:38:24.596724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.569 [2024-07-23 18:38:24.596732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.569 [2024-07-23 18:38:24.596740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.569 [2024-07-23 18:38:24.596808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.570 [2024-07-23 
18:38:24.596819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.570 [2024-07-23 18:38:24.596827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.570 [2024-07-23 18:38:24.596835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.570 [2024-07-23 18:38:24.596872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.570 [2024-07-23 18:38:24.596881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.570 [2024-07-23 18:38:24.596889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.570 [2024-07-23 18:38:24.596895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.624007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.624078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.829 [2024-07-23 18:38:24.624093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.624102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.829 [2024-07-23 18:38:24.640342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.829 [2024-07-23 18:38:24.640446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.829 [2024-07-23 18:38:24.640555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.829 [2024-07-23 18:38:24.640715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.829 [2024-07-23 18:38:24.640806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640863] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.829 [2024-07-23 18:38:24.640882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.640951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.829 [2024-07-23 18:38:24.640969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.829 [2024-07-23 18:38:24.640977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.829 [2024-07-23 18:38:24.640985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.829 [2024-07-23 18:38:24.641134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.292 ms, result 0 00:20:25.397 00:20:25.397 00:20:25.397 18:38:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.301 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:27.301 18:38:27 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:27.560 [2024-07-23 18:38:27.386957] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:20:27.560 [2024-07-23 18:38:27.387091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91216 ] 00:20:27.560 [2024-07-23 18:38:27.536754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.820 [2024-07-23 18:38:27.630858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.820 [2024-07-23 18:38:27.790284] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.820 [2024-07-23 18:38:27.790377] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.080 [2024-07-23 18:38:27.943704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.943793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.080 [2024-07-23 18:38:27.943834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.080 [2024-07-23 18:38:27.943855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.943922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.943935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.080 [2024-07-23 18:38:27.943945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:28.080 [2024-07-23 18:38:27.943958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.943981] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.080 [2024-07-23 18:38:27.944306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:20:28.080 [2024-07-23 18:38:27.944342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.944357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.080 [2024-07-23 18:38:27.944368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:28.080 [2024-07-23 18:38:27.944376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.947000] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:28.080 [2024-07-23 18:38:27.950863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.950902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:28.080 [2024-07-23 18:38:27.950923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.872 ms 00:20:28.080 [2024-07-23 18:38:27.950933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.951002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.951014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:28.080 [2024-07-23 18:38:27.951024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:28.080 [2024-07-23 18:38:27.951032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.964312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.964361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.080 [2024-07-23 18:38:27.964390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.234 ms 00:20:28.080 [2024-07-23 18:38:27.964400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.964541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.964554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.080 [2024-07-23 18:38:27.964563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:28.080 [2024-07-23 18:38:27.964571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.964677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.964688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.080 [2024-07-23 18:38:27.964710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:28.080 [2024-07-23 18:38:27.964718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.964760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:28.080 [2024-07-23 18:38:27.967614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 [2024-07-23 18:38:27.967638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.080 [2024-07-23 18:38:27.967648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:20:28.080 [2024-07-23 18:38:27.967671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.967712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.080 
[2024-07-23 18:38:27.967722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.080 [2024-07-23 18:38:27.967743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:28.080 [2024-07-23 18:38:27.967759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.080 [2024-07-23 18:38:27.967785] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:28.080 [2024-07-23 18:38:27.967812] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:28.081 [2024-07-23 18:38:27.967858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:28.081 [2024-07-23 18:38:27.967878] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:28.081 [2024-07-23 18:38:27.967997] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.081 [2024-07-23 18:38:27.968022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.081 [2024-07-23 18:38:27.968036] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:28.081 [2024-07-23 18:38:27.968056] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968076] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968094] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:28.081 [2024-07-23 18:38:27.968103] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.081 [2024-07-23 18:38:27.968112] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.081 [2024-07-23 18:38:27.968121] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.081 [2024-07-23 18:38:27.968132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.081 [2024-07-23 18:38:27.968141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.081 [2024-07-23 18:38:27.968150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:20:28.081 [2024-07-23 18:38:27.968163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-07-23 18:38:27.968246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.081 [2024-07-23 18:38:27.968257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.081 [2024-07-23 18:38:27.968267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:28.081 [2024-07-23 18:38:27.968275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-07-23 18:38:27.968391] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.081 [2024-07-23 18:38:27.968408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.081 [2024-07-23 18:38:27.968426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968448] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.081 [2024-07-23 18:38:27.968456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.081 [2024-07-23 18:38:27.968481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.081 [2024-07-23 18:38:27.968498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.081 [2024-07-23 18:38:27.968506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:28.081 [2024-07-23 18:38:27.968517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.081 [2024-07-23 18:38:27.968526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.081 [2024-07-23 18:38:27.968534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:28.081 [2024-07-23 18:38:27.968541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.081 [2024-07-23 18:38:27.968560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.081 [2024-07-23 18:38:27.968598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.081 [2024-07-23 18:38:27.968622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.081 [2024-07-23 18:38:27.968646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.081 [2024-07-23 18:38:27.968669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.081 [2024-07-23 18:38:27.968711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.081 [2024-07-23 18:38:27.968725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.081 [2024-07-23 18:38:27.968733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:28.081 [2024-07-23 
18:38:27.968739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.081 [2024-07-23 18:38:27.968746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.081 [2024-07-23 18:38:27.968753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:28.081 [2024-07-23 18:38:27.968760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.081 [2024-07-23 18:38:27.968774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:28.081 [2024-07-23 18:38:27.968781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968788] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.081 [2024-07-23 18:38:27.968799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.081 [2024-07-23 18:38:27.968807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.081 [2024-07-23 18:38:27.968832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.081 [2024-07-23 18:38:27.968843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.081 [2024-07-23 18:38:27.968851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.081 [2024-07-23 18:38:27.968859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.081 [2024-07-23 18:38:27.968866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.081 [2024-07-23 18:38:27.968874] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.081 [2024-07-23 18:38:27.968884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.081 [2024-07-23 18:38:27.968894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.081 [2024-07-23 18:38:27.968904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:28.081 [2024-07-23 18:38:27.968912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:28.082 [2024-07-23 18:38:27.968919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:28.082 [2024-07-23 18:38:27.968927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:28.082 [2024-07-23 18:38:27.968935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:28.082 [2024-07-23 18:38:27.968943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:28.082 [2024-07-23 18:38:27.968951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:28.082 [2024-07-23 18:38:27.968958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:28.082 [2024-07-23 18:38:27.968966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:28.082 [2024-07-23 18:38:27.968977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.968986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.968994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.969002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.969011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:28.082 [2024-07-23 18:38:27.969018] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.082 [2024-07-23 18:38:27.969027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.969035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.082 [2024-07-23 18:38:27.969043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.082 [2024-07-23 18:38:27.969064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.082 [2024-07-23 18:38:27.969073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.082 [2024-07-23 18:38:27.969082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:27.969092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.082 [2024-07-23 18:38:27.969102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:20:28.082 [2024-07-23 18:38:27.969113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.002264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.002331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.082 [2024-07-23 18:38:28.002351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.135 ms 00:20:28.082 [2024-07-23 18:38:28.002362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.002499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.002510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.082 [2024-07-23 18:38:28.002521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:28.082 [2024-07-23 18:38:28.002536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.019600] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.019656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.082 [2024-07-23 18:38:28.019673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.953 ms 00:20:28.082 [2024-07-23 18:38:28.019683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.019765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.019780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.082 [2024-07-23 18:38:28.019791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:28.082 [2024-07-23 18:38:28.019813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.020668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.020690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.082 [2024-07-23 18:38:28.020718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:20:28.082 [2024-07-23 18:38:28.020727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.020882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.020903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.082 [2024-07-23 18:38:28.020913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:28.082 [2024-07-23 18:38:28.020923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.031649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.031698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.082 [2024-07-23 18:38:28.031712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.716 ms 00:20:28.082 [2024-07-23 18:38:28.031722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.035544] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:28.082 [2024-07-23 18:38:28.035607] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:28.082 [2024-07-23 18:38:28.035628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.035638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:28.082 [2024-07-23 18:38:28.035649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.749 ms 00:20:28.082 [2024-07-23 18:38:28.035658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.050882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.050933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:28.082 [2024-07-23 18:38:28.050948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.198 ms 00:20:28.082 [2024-07-23 18:38:28.050956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.054177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 
[2024-07-23 18:38:28.054214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:28.082 [2024-07-23 18:38:28.054225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:20:28.082 [2024-07-23 18:38:28.054233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.055734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.055764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:28.082 [2024-07-23 18:38:28.055790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:20:28.082 [2024-07-23 18:38:28.055799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.056179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.056214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.082 [2024-07-23 18:38:28.056225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:28.082 [2024-07-23 18:38:28.056234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.082 [2024-07-23 18:38:28.088091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.082 [2024-07-23 18:38:28.088178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:28.082 [2024-07-23 18:38:28.088195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.890 ms 00:20:28.082 [2024-07-23 18:38:28.088204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.096791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:28.083 [2024-07-23 18:38:28.102636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.102679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:28.083 [2024-07-23 18:38:28.102695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.375 ms 00:20:28.083 [2024-07-23 18:38:28.102704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.102828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.102841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:28.083 [2024-07-23 18:38:28.102852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:28.083 [2024-07-23 18:38:28.102875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.102971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.102994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.083 [2024-07-23 18:38:28.103008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:28.083 [2024-07-23 18:38:28.103017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.103046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.103064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.083 [2024-07-23 18:38:28.103074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:28.083 
[2024-07-23 18:38:28.103081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.103123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:28.083 [2024-07-23 18:38:28.103134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.103142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:28.083 [2024-07-23 18:38:28.103168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:28.083 [2024-07-23 18:38:28.103176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.108230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.108279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.083 [2024-07-23 18:38:28.108308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.042 ms 00:20:28.083 [2024-07-23 18:38:28.108316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.108407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.083 [2024-07-23 18:38:28.108418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.083 [2024-07-23 18:38:28.108428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:28.083 [2024-07-23 18:38:28.108441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.083 [2024-07-23 18:38:28.110064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.110 ms, result 0 00:21:05.673  Copying: 30/1024 [MB] (30 MBps) Copying: 58/1024 [MB] (28 MBps) Copying: 86/1024 [MB] (27 MBps) Copying: 114/1024 [MB] (28 MBps) Copying: 142/1024 [MB] (27 MBps) Copying: 170/1024 [MB] (27 MBps) Copying: 197/1024 [MB] (27 MBps) Copying: 225/1024 [MB] (27 MBps) Copying: 253/1024 [MB] (28 MBps) Copying: 280/1024 [MB] (27 MBps) Copying: 308/1024 [MB] (27 MBps) Copying: 336/1024 [MB] (28 MBps) Copying: 365/1024 [MB] (28 MBps) Copying: 393/1024 [MB] (28 MBps) Copying: 421/1024 [MB] (28 MBps) Copying: 449/1024 [MB] (28 MBps) Copying: 477/1024 [MB] (27 MBps) Copying: 505/1024 [MB] (27 MBps) Copying: 533/1024 [MB] (28 MBps) Copying: 562/1024 [MB] (28 MBps) Copying: 590/1024 [MB] (27 MBps) Copying: 616/1024 [MB] (26 MBps) Copying: 643/1024 [MB] (26 MBps) Copying: 671/1024 [MB] (27 MBps) Copying: 698/1024 [MB] (27 MBps) Copying: 726/1024 [MB] (27 MBps) Copying: 753/1024 [MB] (26 MBps) Copying: 780/1024 [MB] (27 MBps) Copying: 808/1024 [MB] (27 MBps) Copying: 836/1024 [MB] (28 MBps) Copying: 864/1024 [MB] (27 MBps) Copying: 892/1024 [MB] (27 MBps) Copying: 919/1024 [MB] (27 MBps) Copying: 947/1024 [MB] (27 MBps) Copying: 976/1024 [MB] (28 MBps) Copying: 1003/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-23 18:39:05.522738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.522831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:05.673 [2024-07-23 18:39:05.522849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.673 [2024-07-23 18:39:05.522859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.525501] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:05.673 [2024-07-23 18:39:05.528152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.528250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:05.673 [2024-07-23 18:39:05.528294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:21:05.673 [2024-07-23 18:39:05.528320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.538192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.538273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:05.673 [2024-07-23 18:39:05.538327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:21:05.673 [2024-07-23 18:39:05.538353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.560169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.560268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:05.673 [2024-07-23 18:39:05.560304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.813 ms 00:21:05.673 [2024-07-23 18:39:05.560336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.565455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.565544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:05.673 [2024-07-23 18:39:05.565577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:21:05.673 [2024-07-23 18:39:05.565630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.567245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.567321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:05.673 [2024-07-23 18:39:05.567353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:21:05.673 [2024-07-23 18:39:05.567384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.572162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.572240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:05.673 [2024-07-23 18:39:05.572316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:21:05.673 [2024-07-23 18:39:05.572358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.681863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.681982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:05.673 [2024-07-23 18:39:05.682026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.640 ms 00:21:05.673 [2024-07-23 18:39:05.682061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.684681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.684769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:05.673 [2024-07-23 18:39:05.684817] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:21:05.673 [2024-07-23 18:39:05.684841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.686511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.686600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:05.673 [2024-07-23 18:39:05.686667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:21:05.673 [2024-07-23 18:39:05.686691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.687959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.688039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:05.673 [2024-07-23 18:39:05.688074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:21:05.673 [2024-07-23 18:39:05.688110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.689310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.673 [2024-07-23 18:39:05.689396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:05.673 [2024-07-23 18:39:05.689431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:21:05.673 [2024-07-23 18:39:05.689455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.673 [2024-07-23 18:39:05.689508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:05.673 [2024-07-23 18:39:05.689542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113152 / 261120 wr_cnt: 1 state: open 00:21:05.673 [2024-07-23 18:39:05.689631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:05.673 [2024-07-23 18:39:05.689686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:05.673 [2024-07-23 18:39:05.689732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:05.673 [2024-07-23 18:39:05.689796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.689847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.689894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.689947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.689993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690249] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690586] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 
18:39:05.690839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.690999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:21:05.674 [2024-07-23 18:39:05.691072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:05.674 [2024-07-23 18:39:05.691201] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:05.674 [2024-07-23 18:39:05.691211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8384e09-63b5-481c-89d9-86ebd1f98094 00:21:05.674 [2024-07-23 18:39:05.691221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113152 00:21:05.674 [2024-07-23 18:39:05.691235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114112 00:21:05.674 [2024-07-23 18:39:05.691244] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113152 00:21:05.674 [2024-07-23 18:39:05.691253] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085 00:21:05.674 [2024-07-23 18:39:05.691262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:05.674 [2024-07-23 18:39:05.691271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:05.674 [2024-07-23 18:39:05.691290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:05.674 [2024-07-23 18:39:05.691298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:05.674 [2024-07-23 18:39:05.691306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:05.674 [2024-07-23 18:39:05.691316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.674 [2024-07-23 18:39:05.691326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:05.674 [2024-07-23 18:39:05.691335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:21:05.674 [2024-07-23 18:39:05.691344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.693173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:05.674 [2024-07-23 18:39:05.693206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:05.674 [2024-07-23 18:39:05.693218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:21:05.674 [2024-07-23 18:39:05.693227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.693334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.674 [2024-07-23 18:39:05.693352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:05.674 [2024-07-23 18:39:05.693363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:05.674 [2024-07-23 18:39:05.693387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.698908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.698937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.674 [2024-07-23 18:39:05.698948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.698957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.699008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.699018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.674 [2024-07-23 18:39:05.699039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.699051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.699115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.699128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.674 [2024-07-23 18:39:05.699137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.699145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.699163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.699172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.674 [2024-07-23 18:39:05.699181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.699198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.711914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.711972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.674 [2024-07-23 18:39:05.711996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.712006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.674 [2024-07-23 18:39:05.720185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 
18:39:05.720264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.674 [2024-07-23 18:39:05.720283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.674 [2024-07-23 18:39:05.720336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.674 [2024-07-23 18:39:05.720453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:05.674 [2024-07-23 18:39:05.720524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.674 [2024-07-23 18:39:05.720606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.674 [2024-07-23 18:39:05.720662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.674 [2024-07-23 18:39:05.720672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.674 [2024-07-23 18:39:05.720706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.674 [2024-07-23 18:39:05.720724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.675 [2024-07-23 18:39:05.720860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 200.114 ms, result 0 00:21:06.611 00:21:06.611 00:21:06.611 18:39:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:06.870 [2024-07-23 18:39:06.673684] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:21:06.870 [2024-07-23 18:39:06.673846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91617 ] 00:21:06.871 [2024-07-23 18:39:06.821130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.871 [2024-07-23 18:39:06.867684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.131 [2024-07-23 18:39:06.970431] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.131 [2024-07-23 18:39:06.970510] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.131 [2024-07-23 18:39:07.118037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.118092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.131 [2024-07-23 18:39:07.118106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:07.131 [2024-07-23 18:39:07.118139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.118216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.118228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.131 [2024-07-23 18:39:07.118237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:07.131 [2024-07-23 18:39:07.118256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.118278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.131 [2024-07-23 18:39:07.118505] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.131 [2024-07-23 18:39:07.118531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.118554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.131 [2024-07-23 18:39:07.118564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:07.131 [2024-07-23 18:39:07.118586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.120055] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:07.131 [2024-07-23 18:39:07.122565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.122622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:07.131 [2024-07-23 18:39:07.122639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:21:07.131 [2024-07-23 18:39:07.122649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.122714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.122725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:07.131 [2024-07-23 18:39:07.122737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:07.131 [2024-07-23 18:39:07.122746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.129593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 
18:39:07.129637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.131 [2024-07-23 18:39:07.129648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.821 ms 00:21:07.131 [2024-07-23 18:39:07.129656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.129760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.129775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.131 [2024-07-23 18:39:07.129784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:07.131 [2024-07-23 18:39:07.129792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.129852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.129863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.131 [2024-07-23 18:39:07.129880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:07.131 [2024-07-23 18:39:07.129888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.129915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.131 [2024-07-23 18:39:07.131617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.131646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.131 [2024-07-23 18:39:07.131670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:21:07.131 [2024-07-23 18:39:07.131678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.131712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.131722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.131 [2024-07-23 18:39:07.131734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:07.131 [2024-07-23 18:39:07.131743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.131772] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:07.131 [2024-07-23 18:39:07.131795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:07.131 [2024-07-23 18:39:07.131841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:07.131 [2024-07-23 18:39:07.131884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:07.131 [2024-07-23 18:39:07.131969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:07.131 [2024-07-23 18:39:07.131991] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.131 [2024-07-23 18:39:07.132004] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:07.131 [2024-07-23 18:39:07.132015] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.131 [2024-07-23 18:39:07.132026] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.131 [2024-07-23 18:39:07.132045] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:07.131 [2024-07-23 18:39:07.132054] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.131 [2024-07-23 18:39:07.132064] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:07.131 [2024-07-23 18:39:07.132072] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:07.131 [2024-07-23 18:39:07.132081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.132090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.131 [2024-07-23 18:39:07.132099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:07.131 [2024-07-23 18:39:07.132112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.131 [2024-07-23 18:39:07.132187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.131 [2024-07-23 18:39:07.132197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.131 [2024-07-23 18:39:07.132206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:07.131 [2024-07-23 18:39:07.132215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.132303] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.132 [2024-07-23 18:39:07.132320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.132 [2024-07-23 18:39:07.132330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.132 [2024-07-23 18:39:07.132360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.132 [2024-07-23 18:39:07.132385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.132 [2024-07-23 18:39:07.132401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.132 [2024-07-23 18:39:07.132413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:07.132 [2024-07-23 18:39:07.132423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.132 [2024-07-23 18:39:07.132432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.132 [2024-07-23 18:39:07.132441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:07.132 [2024-07-23 18:39:07.132449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.132 [2024-07-23 18:39:07.132465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.132 [2024-07-23 18:39:07.132490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.132 [2024-07-23 18:39:07.132515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.132 [2024-07-23 18:39:07.132539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.132 [2024-07-23 18:39:07.132579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.132 [2024-07-23 18:39:07.132606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.132 [2024-07-23 18:39:07.132622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.132 [2024-07-23 18:39:07.132631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:07.132 [2024-07-23 18:39:07.132639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.132 [2024-07-23 18:39:07.132647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:07.132 [2024-07-23 18:39:07.132655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:07.132 [2024-07-23 18:39:07.132663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:07.132 [2024-07-23 18:39:07.132680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:07.132 [2024-07-23 18:39:07.132688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132698] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.132 [2024-07-23 18:39:07.132709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.132 [2024-07-23 18:39:07.132728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.132 [2024-07-23 18:39:07.132745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.132 [2024-07-23 18:39:07.132754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.132 [2024-07-23 18:39:07.132762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.132 [2024-07-23 18:39:07.132771] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.132 [2024-07-23 18:39:07.132779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.132 [2024-07-23 18:39:07.132798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.132 [2024-07-23 18:39:07.132807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.132 [2024-07-23 18:39:07.132818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:07.132 [2024-07-23 18:39:07.132835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:07.132 [2024-07-23 18:39:07.132844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:07.132 [2024-07-23 18:39:07.132852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:07.132 [2024-07-23 18:39:07.132864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:07.132 [2024-07-23 18:39:07.132873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:07.132 [2024-07-23 18:39:07.132882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:07.132 [2024-07-23 18:39:07.132890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:07.132 [2024-07-23 18:39:07.132898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:07.132 [2024-07-23 18:39:07.132906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:07.132 [2024-07-23 18:39:07.132946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.132 [2024-07-23 18:39:07.132954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.132 [2024-07-23 18:39:07.132963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:21:07.132 [2024-07-23 18:39:07.132971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.132 [2024-07-23 18:39:07.132989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.132 [2024-07-23 18:39:07.132998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.132 [2024-07-23 18:39:07.133010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.133020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.132 [2024-07-23 18:39:07.133029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:21:07.132 [2024-07-23 18:39:07.133048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.155765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.155890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.132 [2024-07-23 18:39:07.155953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.701 ms 00:21:07.132 [2024-07-23 18:39:07.155992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.156347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.156408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.132 [2024-07-23 18:39:07.156448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:07.132 [2024-07-23 18:39:07.156497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.173231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.173305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.132 [2024-07-23 18:39:07.173333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.540 ms 00:21:07.132 [2024-07-23 18:39:07.173353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.173425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.173472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.132 [2024-07-23 18:39:07.173504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:07.132 [2024-07-23 18:39:07.173524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.132 [2024-07-23 18:39:07.174205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.132 [2024-07-23 18:39:07.174249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.133 [2024-07-23 18:39:07.174273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:21:07.133 [2024-07-23 18:39:07.174292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.133 [2024-07-23 18:39:07.174530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.133 [2024-07-23 18:39:07.174607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.133 [2024-07-23 18:39:07.174661] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:21:07.133 [2024-07-23 18:39:07.174689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.392 [2024-07-23 18:39:07.182516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.182583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.393 [2024-07-23 18:39:07.182602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.791 ms 00:21:07.393 [2024-07-23 18:39:07.182616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.185800] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:07.393 [2024-07-23 18:39:07.185862] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:07.393 [2024-07-23 18:39:07.185884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.185899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:07.393 [2024-07-23 18:39:07.185913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:21:07.393 [2024-07-23 18:39:07.185927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.200726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.200779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:07.393 [2024-07-23 18:39:07.200813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.770 ms 00:21:07.393 [2024-07-23 18:39:07.200822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.202611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.202646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:07.393 [2024-07-23 18:39:07.202656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:21:07.393 [2024-07-23 18:39:07.202664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.204183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.204223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:07.393 [2024-07-23 18:39:07.204234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:21:07.393 [2024-07-23 18:39:07.204242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.204537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.204562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.393 [2024-07-23 18:39:07.204586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:21:07.393 [2024-07-23 18:39:07.204600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.227146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.227223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:07.393 [2024-07-23 18:39:07.227241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:21:07.393 
[2024-07-23 18:39:07.227252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.233728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:07.393 [2024-07-23 18:39:07.236918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.236951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.393 [2024-07-23 18:39:07.236989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.630 ms 00:21:07.393 [2024-07-23 18:39:07.236998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.237083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.237095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:07.393 [2024-07-23 18:39:07.237105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:07.393 [2024-07-23 18:39:07.237114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.238816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.238862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.393 [2024-07-23 18:39:07.238874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:21:07.393 [2024-07-23 18:39:07.238894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.238927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.238937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:07.393 [2024-07-23 18:39:07.238955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:07.393 [2024-07-23 18:39:07.238967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.239026] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:07.393 [2024-07-23 18:39:07.239044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.239058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:07.393 [2024-07-23 18:39:07.239067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:07.393 [2024-07-23 18:39:07.239104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.242920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.242955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:07.393 [2024-07-23 18:39:07.242967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.795 ms 00:21:07.393 [2024-07-23 18:39:07.242993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.243078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.393 [2024-07-23 18:39:07.243096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:07.393 [2024-07-23 18:39:07.243110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:07.393 [2024-07-23 18:39:07.243118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.393 [2024-07-23 18:39:07.248067] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.199 ms, result 0 00:21:41.753  Copying: 28/1024 [MB] (28 MBps) Copying: 59/1024 [MB] (30 MBps) Copying: 89/1024 [MB] (30 MBps) Copying: 119/1024 [MB] (30 MBps) Copying: 149/1024 [MB] (30 MBps) Copying: 180/1024 [MB] (30 MBps) Copying: 210/1024 [MB] (30 MBps) Copying: 241/1024 [MB] (30 MBps) Copying: 271/1024 [MB] (30 MBps) Copying: 301/1024 [MB] (29 MBps) Copying: 331/1024 [MB] (29 MBps) Copying: 360/1024 [MB] (29 MBps) Copying: 390/1024 [MB] (29 MBps) Copying: 420/1024 [MB] (30 MBps) Copying: 451/1024 [MB] (30 MBps) Copying: 481/1024 [MB] (30 MBps) Copying: 511/1024 [MB] (30 MBps) Copying: 542/1024 [MB] (30 MBps) Copying: 572/1024 [MB] (30 MBps) Copying: 602/1024 [MB] (29 MBps) Copying: 633/1024 [MB] (30 MBps) Copying: 663/1024 [MB] (30 MBps) Copying: 694/1024 [MB] (31 MBps) Copying: 725/1024 [MB] (31 MBps) Copying: 757/1024 [MB] (31 MBps) Copying: 788/1024 [MB] (31 MBps) Copying: 818/1024 [MB] (30 MBps) Copying: 849/1024 [MB] (30 MBps) Copying: 879/1024 [MB] (30 MBps) Copying: 910/1024 [MB] (30 MBps) Copying: 940/1024 [MB] (29 MBps) Copying: 970/1024 [MB] (30 MBps) Copying: 1000/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-23 18:39:41.695295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.695485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.753 [2024-07-23 18:39:41.695541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:41.753 [2024-07-23 18:39:41.695624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.695754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.753 [2024-07-23 18:39:41.697562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.697665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.753 [2024-07-23 18:39:41.697703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:21:41.753 [2024-07-23 18:39:41.697737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.698257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.698301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.753 [2024-07-23 18:39:41.698355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:21:41.753 [2024-07-23 18:39:41.698398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.708171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.708237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.753 [2024-07-23 18:39:41.708258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.748 ms 00:21:41.753 [2024-07-23 18:39:41.708282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.717658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.717763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.753 [2024-07-23 18:39:41.717789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.308 ms 
00:21:41.753 [2024-07-23 18:39:41.717804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.719325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.719373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.753 [2024-07-23 18:39:41.719386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:21:41.753 [2024-07-23 18:39:41.719407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.753 [2024-07-23 18:39:41.723117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.753 [2024-07-23 18:39:41.723160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.753 [2024-07-23 18:39:41.723181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:21:41.753 [2024-07-23 18:39:41.723191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.870008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.014 [2024-07-23 18:39:41.870076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.014 [2024-07-23 18:39:41.870094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 147.051 ms 00:21:42.014 [2024-07-23 18:39:41.870105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.872784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.014 [2024-07-23 18:39:41.872826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:42.014 [2024-07-23 18:39:41.872837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:21:42.014 [2024-07-23 18:39:41.872848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.874345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.014 [2024-07-23 18:39:41.874389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:42.014 [2024-07-23 18:39:41.874400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:21:42.014 [2024-07-23 18:39:41.874409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.875808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.014 [2024-07-23 18:39:41.875857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.014 [2024-07-23 18:39:41.875875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:21:42.014 [2024-07-23 18:39:41.875885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.877075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.014 [2024-07-23 18:39:41.877116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.014 [2024-07-23 18:39:41.877128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:21:42.014 [2024-07-23 18:39:41.877137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.014 [2024-07-23 18:39:41.877167] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.014 [2024-07-23 18:39:41.877199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 
00:21:42.014 [2024-07-23 18:39:41.877211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:21:42.014 [2024-07-23 18:39:41.877475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.014 [2024-07-23 18:39:41.877512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.877994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.015 [2024-07-23 18:39:41.878269] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.015 [2024-07-23 18:39:41.878279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e8384e09-63b5-481c-89d9-86ebd1f98094 00:21:42.015 [2024-07-23 18:39:41.878294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:42.015 [2024-07-23 18:39:41.878302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21440 00:21:42.015 [2024-07-23 18:39:41.878310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20480 00:21:42.015 [2024-07-23 18:39:41.878319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0469 00:21:42.015 [2024-07-23 18:39:41.878327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.015 [2024-07-23 18:39:41.878345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.015 [2024-07-23 18:39:41.878354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.015 [2024-07-23 18:39:41.878367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.015 [2024-07-23 18:39:41.878379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.015 [2024-07-23 18:39:41.878392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.015 [2024-07-23 18:39:41.878406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.015 [2024-07-23 18:39:41.878422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:21:42.015 [2024-07-23 18:39:41.878431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.015 [2024-07-23 18:39:41.880303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.015 [2024-07-23 18:39:41.880332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.015 [2024-07-23 18:39:41.880344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:21:42.015 [2024-07-23 18:39:41.880355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.880475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.016 [2024-07-23 18:39:41.880486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.016 [2024-07-23 18:39:41.880504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:21:42.016 [2024-07-23 18:39:41.880517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.886387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.886419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.016 [2024-07-23 18:39:41.886432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.886445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.886501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.886513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.016 [2024-07-23 18:39:41.886522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.886536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.886607] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.886621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.016 [2024-07-23 18:39:41.886632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.886640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.886659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.886690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.016 [2024-07-23 18:39:41.886704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.886716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.900175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.900248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.016 [2024-07-23 18:39:41.900264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.900274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.908994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.016 [2024-07-23 18:39:41.909067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.016 [2024-07-23 18:39:41.909200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.016 [2024-07-23 18:39:41.909281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.016 [2024-07-23 18:39:41.909404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.016 [2024-07-23 18:39:41.909470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.016 [2024-07-23 18:39:41.909542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.016 [2024-07-23 18:39:41.909625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.016 [2024-07-23 18:39:41.909634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.016 [2024-07-23 18:39:41.909643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.016 [2024-07-23 18:39:41.909779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.698 ms, result 0 00:21:42.275 00:21:42.275 00:21:42.275 18:39:42 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.178 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:44.178 18:39:43 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:44.178 18:39:43 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:44.178 18:39:43 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:44.178 18:39:44 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:44.178 18:39:44 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:44.178 18:39:44 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90248 00:21:44.178 18:39:44 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90248 ']' 00:21:44.178 18:39:44 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90248 00:21:44.178 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90248) - No such process 00:21:44.178 18:39:44 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 90248 is not found' 00:21:44.178 Process with pid 90248 is not found 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:44.179 Remove shared memory files 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:44.179 18:39:44 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:44.179 ************************************ 00:21:44.179 END TEST ftl_restore 00:21:44.179 ************************************ 00:21:44.179 00:21:44.179 real 2m45.046s 00:21:44.179 user 2m32.926s 00:21:44.179 sys 0m13.066s 00:21:44.179 18:39:44 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:44.179 18:39:44 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 18:39:44 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 
0000:00:11.0 00:21:44.179 18:39:44 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:44.179 18:39:44 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:44.179 18:39:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 ************************************ 00:21:44.179 START TEST ftl_dirty_shutdown 00:21:44.179 ************************************ 00:21:44.179 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:44.179 * Looking for test storage... 00:21:44.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92064 00:21:44.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92064 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 92064 ']' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.438 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:44.439 [2024-07-23 18:39:44.373880] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
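With the target up, dirty_shutdown.sh (invoked above with 0000:00:10.0 as the NV-cache controller and 0000:00:11.0 as the base device, block_size=4096, chunk_size=262144, timeout 240 s) drives everything through rpc.py. The trace that follows repeatedly runs the get_bdev_size helper, which queries a bdev and reports its size in MiB. A minimal sketch of that probe, using the values from this run; the closing MiB division is an assumed reconstruction of the helper's arithmetic, inferred only from the 4096 * 1310720 -> 5120 result logged below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach the base controller and query the resulting bdev (as in the trace below)
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  bdev_info=$($rpc bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720 in this run
  echo $(( bs * nb / 1024 / 1024 ))              # 5120 MiB (assumed expression)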
00:21:44.439 [2024-07-23 18:39:44.374004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92064 ] 00:21:44.698 [2024-07-23 18:39:44.520735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.698 [2024-07-23 18:39:44.565139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:45.266 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:45.525 { 00:21:45.525 "name": "nvme0n1", 00:21:45.525 "aliases": [ 00:21:45.525 "32c27096-7009-4e9e-9e6e-0fd6943982de" 00:21:45.525 ], 00:21:45.525 "product_name": "NVMe disk", 00:21:45.525 "block_size": 4096, 00:21:45.525 "num_blocks": 1310720, 00:21:45.525 "uuid": "32c27096-7009-4e9e-9e6e-0fd6943982de", 00:21:45.525 "assigned_rate_limits": { 00:21:45.525 "rw_ios_per_sec": 0, 00:21:45.525 "rw_mbytes_per_sec": 0, 00:21:45.525 "r_mbytes_per_sec": 0, 00:21:45.525 "w_mbytes_per_sec": 0 00:21:45.525 }, 00:21:45.525 "claimed": true, 00:21:45.525 "claim_type": "read_many_write_one", 00:21:45.525 "zoned": false, 00:21:45.525 "supported_io_types": { 00:21:45.525 "read": true, 00:21:45.525 "write": true, 00:21:45.525 "unmap": true, 00:21:45.525 "write_zeroes": true, 00:21:45.525 "flush": true, 00:21:45.525 "reset": true, 00:21:45.525 "compare": true, 00:21:45.525 "compare_and_write": false, 00:21:45.525 "abort": true, 00:21:45.525 "nvme_admin": true, 00:21:45.525 "nvme_io": true 00:21:45.525 }, 00:21:45.525 "driver_specific": { 00:21:45.525 "nvme": [ 00:21:45.525 { 00:21:45.525 "pci_address": "0000:00:11.0", 00:21:45.525 "trid": { 00:21:45.525 "trtype": "PCIe", 00:21:45.525 "traddr": "0000:00:11.0" 00:21:45.525 }, 00:21:45.525 "ctrlr_data": { 00:21:45.525 "cntlid": 0, 00:21:45.525 
"vendor_id": "0x1b36", 00:21:45.525 "model_number": "QEMU NVMe Ctrl", 00:21:45.525 "serial_number": "12341", 00:21:45.525 "firmware_revision": "8.0.0", 00:21:45.525 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:45.525 "oacs": { 00:21:45.525 "security": 0, 00:21:45.525 "format": 1, 00:21:45.525 "firmware": 0, 00:21:45.525 "ns_manage": 1 00:21:45.525 }, 00:21:45.525 "multi_ctrlr": false, 00:21:45.525 "ana_reporting": false 00:21:45.525 }, 00:21:45.525 "vs": { 00:21:45.525 "nvme_version": "1.4" 00:21:45.525 }, 00:21:45.525 "ns_data": { 00:21:45.525 "id": 1, 00:21:45.525 "can_share": false 00:21:45.525 } 00:21:45.525 } 00:21:45.525 ], 00:21:45.525 "mp_policy": "active_passive" 00:21:45.525 } 00:21:45.525 } 00:21:45.525 ]' 00:21:45.525 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.785 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:46.043 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=99fb0c21-6231-4ead-a846-bf92b76828b5 00:21:46.043 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:46.044 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99fb0c21-6231-4ead-a846-bf92b76828b5 00:21:46.044 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:46.303 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2bb06a59-9376-4cad-ab82-0e3263c2e228 00:21:46.303 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2bb06a59-9376-4cad-ab82-0e3263c2e228 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 
18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:46.562 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:46.562 { 00:21:46.562 "name": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:46.562 "aliases": [ 00:21:46.562 "lvs/nvme0n1p0" 00:21:46.562 ], 00:21:46.562 "product_name": "Logical Volume", 00:21:46.562 "block_size": 4096, 00:21:46.562 "num_blocks": 26476544, 00:21:46.562 "uuid": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:46.562 "assigned_rate_limits": { 00:21:46.562 "rw_ios_per_sec": 0, 00:21:46.562 "rw_mbytes_per_sec": 0, 00:21:46.562 "r_mbytes_per_sec": 0, 00:21:46.562 "w_mbytes_per_sec": 0 00:21:46.562 }, 00:21:46.562 "claimed": false, 00:21:46.562 "zoned": false, 00:21:46.562 "supported_io_types": { 00:21:46.562 "read": true, 00:21:46.562 "write": true, 00:21:46.562 "unmap": true, 00:21:46.562 "write_zeroes": true, 00:21:46.562 "flush": false, 00:21:46.562 "reset": true, 00:21:46.562 "compare": false, 00:21:46.562 "compare_and_write": false, 00:21:46.562 "abort": false, 00:21:46.562 "nvme_admin": false, 00:21:46.562 "nvme_io": false 00:21:46.562 }, 00:21:46.562 "driver_specific": { 00:21:46.562 "lvol": { 00:21:46.562 "lvol_store_uuid": "2bb06a59-9376-4cad-ab82-0e3263c2e228", 00:21:46.562 "base_bdev": "nvme0n1", 00:21:46.562 "thin_provision": true, 00:21:46.562 "num_allocated_clusters": 0, 00:21:46.562 "snapshot": false, 00:21:46.562 "clone": false, 00:21:46.562 "esnap_clone": false 00:21:46.562 } 00:21:46.562 } 00:21:46.562 } 00:21:46.562 ]' 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:46.821 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:47.080 
18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:47.080 18:39:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.080 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:47.080 { 00:21:47.080 "name": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:47.080 "aliases": [ 00:21:47.080 "lvs/nvme0n1p0" 00:21:47.080 ], 00:21:47.080 "product_name": "Logical Volume", 00:21:47.080 "block_size": 4096, 00:21:47.080 "num_blocks": 26476544, 00:21:47.080 "uuid": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:47.080 "assigned_rate_limits": { 00:21:47.080 "rw_ios_per_sec": 0, 00:21:47.080 "rw_mbytes_per_sec": 0, 00:21:47.080 "r_mbytes_per_sec": 0, 00:21:47.080 "w_mbytes_per_sec": 0 00:21:47.080 }, 00:21:47.080 "claimed": false, 00:21:47.080 "zoned": false, 00:21:47.080 "supported_io_types": { 00:21:47.080 "read": true, 00:21:47.080 "write": true, 00:21:47.080 "unmap": true, 00:21:47.080 "write_zeroes": true, 00:21:47.080 "flush": false, 00:21:47.080 "reset": true, 00:21:47.080 "compare": false, 00:21:47.080 "compare_and_write": false, 00:21:47.080 "abort": false, 00:21:47.080 "nvme_admin": false, 00:21:47.080 "nvme_io": false 00:21:47.080 }, 00:21:47.080 "driver_specific": { 00:21:47.080 "lvol": { 00:21:47.080 "lvol_store_uuid": "2bb06a59-9376-4cad-ab82-0e3263c2e228", 00:21:47.080 "base_bdev": "nvme0n1", 00:21:47.080 "thin_provision": true, 00:21:47.080 "num_allocated_clusters": 0, 00:21:47.080 "snapshot": false, 00:21:47.080 "clone": false, 00:21:47.080 "esnap_clone": false 00:21:47.080 } 00:21:47.080 } 00:21:47.080 } 00:21:47.080 ]' 00:21:47.080 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:47.339 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:47.598 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b5d87f4-de9b-4bca-9d64-91286531bbb7 00:21:47.598 18:39:47 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:47.598 { 00:21:47.598 "name": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:47.598 "aliases": [ 00:21:47.598 "lvs/nvme0n1p0" 00:21:47.598 ], 00:21:47.598 "product_name": "Logical Volume", 00:21:47.598 "block_size": 4096, 00:21:47.598 "num_blocks": 26476544, 00:21:47.598 "uuid": "3b5d87f4-de9b-4bca-9d64-91286531bbb7", 00:21:47.598 "assigned_rate_limits": { 00:21:47.598 "rw_ios_per_sec": 0, 00:21:47.598 "rw_mbytes_per_sec": 0, 00:21:47.598 "r_mbytes_per_sec": 0, 00:21:47.598 "w_mbytes_per_sec": 0 00:21:47.598 }, 00:21:47.598 "claimed": false, 00:21:47.598 "zoned": false, 00:21:47.598 "supported_io_types": { 00:21:47.598 "read": true, 00:21:47.598 "write": true, 00:21:47.598 "unmap": true, 00:21:47.598 "write_zeroes": true, 00:21:47.598 "flush": false, 00:21:47.598 "reset": true, 00:21:47.598 "compare": false, 00:21:47.598 "compare_and_write": false, 00:21:47.598 "abort": false, 00:21:47.598 "nvme_admin": false, 00:21:47.598 "nvme_io": false 00:21:47.598 }, 00:21:47.598 "driver_specific": { 00:21:47.598 "lvol": { 00:21:47.598 "lvol_store_uuid": "2bb06a59-9376-4cad-ab82-0e3263c2e228", 00:21:47.598 "base_bdev": "nvme0n1", 00:21:47.598 "thin_provision": true, 00:21:47.598 "num_allocated_clusters": 0, 00:21:47.598 "snapshot": false, 00:21:47.598 "clone": false, 00:21:47.598 "esnap_clone": false 00:21:47.598 } 00:21:47.598 } 00:21:47.598 } 00:21:47.598 ]' 00:21:47.598 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:47.598 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:47.598 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3b5d87f4-de9b-4bca-9d64-91286531bbb7 --l2p_dram_limit 10' 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:47.858 18:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3b5d87f4-de9b-4bca-9d64-91286531bbb7 --l2p_dram_limit 10 -c nvc0n1p0 00:21:47.859 [2024-07-23 18:39:47.833591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.833652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:47.859 [2024-07-23 18:39:47.833668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:47.859 [2024-07-23 18:39:47.833677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.833744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.833755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.859 [2024-07-23 18:39:47.833772] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:47.859 [2024-07-23 18:39:47.833781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.833810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:47.859 [2024-07-23 18:39:47.834110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:47.859 [2024-07-23 18:39:47.834137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.834148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.859 [2024-07-23 18:39:47.834158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:21:47.859 [2024-07-23 18:39:47.834166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.834199] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b21931e-714e-41ab-a824-bda33f9430c9 00:21:47.859 [2024-07-23 18:39:47.835593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.835621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:47.859 [2024-07-23 18:39:47.835631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:47.859 [2024-07-23 18:39:47.835643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.842926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.842964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.859 [2024-07-23 18:39:47.842974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.227 ms 00:21:47.859 [2024-07-23 18:39:47.842983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.843060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.843079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.859 [2024-07-23 18:39:47.843087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:47.859 [2024-07-23 18:39:47.843095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.843153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.843164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:47.859 [2024-07-23 18:39:47.843172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:47.859 [2024-07-23 18:39:47.843189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.843215] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.859 [2024-07-23 18:39:47.844984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.845014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.859 [2024-07-23 18:39:47.845026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:21:47.859 [2024-07-23 18:39:47.845033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.845068] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.845076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:47.859 [2024-07-23 18:39:47.845086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:47.859 [2024-07-23 18:39:47.845094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.845122] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:47.859 [2024-07-23 18:39:47.845261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:47.859 [2024-07-23 18:39:47.845279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:47.859 [2024-07-23 18:39:47.845289] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:47.859 [2024-07-23 18:39:47.845302] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845312] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845321] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:47.859 [2024-07-23 18:39:47.845329] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:47.859 [2024-07-23 18:39:47.845340] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:47.859 [2024-07-23 18:39:47.845347] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:47.859 [2024-07-23 18:39:47.845365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.845373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:47.859 [2024-07-23 18:39:47.845383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:21:47.859 [2024-07-23 18:39:47.845390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.845465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.859 [2024-07-23 18:39:47.845473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:47.859 [2024-07-23 18:39:47.845484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:47.859 [2024-07-23 18:39:47.845492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.859 [2024-07-23 18:39:47.845593] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:47.859 [2024-07-23 18:39:47.845603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:47.859 [2024-07-23 18:39:47.845613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:47.859 [2024-07-23 18:39:47.845637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:47.859 
[2024-07-23 18:39:47.845664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.859 [2024-07-23 18:39:47.845680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:47.859 [2024-07-23 18:39:47.845688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:47.859 [2024-07-23 18:39:47.845696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.859 [2024-07-23 18:39:47.845711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:47.859 [2024-07-23 18:39:47.845721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:47.859 [2024-07-23 18:39:47.845728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:47.859 [2024-07-23 18:39:47.845743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:47.859 [2024-07-23 18:39:47.845766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:47.859 [2024-07-23 18:39:47.845787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:47.859 [2024-07-23 18:39:47.845810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:47.859 [2024-07-23 18:39:47.845831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.859 [2024-07-23 18:39:47.845846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:47.859 [2024-07-23 18:39:47.845855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.859 [2024-07-23 18:39:47.845870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:47.859 [2024-07-23 18:39:47.845876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:47.859 [2024-07-23 18:39:47.845884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.859 [2024-07-23 18:39:47.845891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:47.859 [2024-07-23 18:39:47.845899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:47.859 [2024-07-23 18:39:47.845906] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:47.859 [2024-07-23 18:39:47.845922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:47.859 [2024-07-23 18:39:47.845929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.859 [2024-07-23 18:39:47.845936] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:47.859 [2024-07-23 18:39:47.845945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:47.859 [2024-07-23 18:39:47.845952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.860 [2024-07-23 18:39:47.845962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.860 [2024-07-23 18:39:47.845973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:47.860 [2024-07-23 18:39:47.845981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:47.860 [2024-07-23 18:39:47.845988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:47.860 [2024-07-23 18:39:47.845996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:47.860 [2024-07-23 18:39:47.846003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:47.860 [2024-07-23 18:39:47.846011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:47.860 [2024-07-23 18:39:47.846022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:47.860 [2024-07-23 18:39:47.846033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:47.860 [2024-07-23 18:39:47.846068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:47.860 [2024-07-23 18:39:47.846075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:47.860 [2024-07-23 18:39:47.846084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:47.860 [2024-07-23 18:39:47.846096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:47.860 [2024-07-23 18:39:47.846105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:47.860 [2024-07-23 18:39:47.846112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:47.860 [2024-07-23 18:39:47.846123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:47.860 [2024-07-23 18:39:47.846130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:47.860 [2024-07-23 18:39:47.846139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:47.860 [2024-07-23 18:39:47.846177] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:47.860 [2024-07-23 18:39:47.846186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.860 [2024-07-23 18:39:47.846205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:47.860 [2024-07-23 18:39:47.846213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:47.860 [2024-07-23 18:39:47.846222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:47.860 [2024-07-23 18:39:47.846231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.860 [2024-07-23 18:39:47.846240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:47.860 [2024-07-23 18:39:47.846256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:21:47.860 [2024-07-23 18:39:47.846267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.860 [2024-07-23 18:39:47.846317] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
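The hex blk_offs/blk_sz values in the superblock metadata dump above line up with the MiB figures in the region dump once multiplied by the FTL block size; the sb region (0x20 blocks reported as 0.12 MiB) implies a 4 KiB block. A quick back-of-the-envelope check in bash (region sizes taken from the dump above; the 4096-byte block size is inferred, not stated in the log):

    blk=4096                                   # inferred FTL block size (0x20 blocks == 0.12 MiB)
    for sz in 0x20 0x5000 0x80 0x800; do
        mib=$(echo "scale=2; $((sz)) * $blk / 1048576" | bc)
        printf '%-7s blocks = %s MiB\n' "$sz" "$mib"
    done
    # expected: 0x20 -> .12 (sb), 0x5000 -> 80.00 (l2p), 0x80 -> .50 (band_md), 0x800 -> 8.00 (each p2l region)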
00:21:47.860 [2024-07-23 18:39:47.846328] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:52.052 [2024-07-23 18:39:51.683595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.052 [2024-07-23 18:39:51.683658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:52.052 [2024-07-23 18:39:51.683673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3844.672 ms 00:21:52.052 [2024-07-23 18:39:51.683682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.052 [2024-07-23 18:39:51.694392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.052 [2024-07-23 18:39:51.694443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:52.052 [2024-07-23 18:39:51.694455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.646 ms 00:21:52.052 [2024-07-23 18:39:51.694465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.052 [2024-07-23 18:39:51.694555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.052 [2024-07-23 18:39:51.694592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:52.052 [2024-07-23 18:39:51.694601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:52.052 [2024-07-23 18:39:51.694610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.052 [2024-07-23 18:39:51.704321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.052 [2024-07-23 18:39:51.704366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.053 [2024-07-23 18:39:51.704378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.669 ms 00:21:52.053 [2024-07-23 18:39:51.704397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.704433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.704443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.053 [2024-07-23 18:39:51.704450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:52.053 [2024-07-23 18:39:51.704459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.704902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.704916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.053 [2024-07-23 18:39:51.704925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:21:52.053 [2024-07-23 18:39:51.704934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.705044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.705068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.053 [2024-07-23 18:39:51.705075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:52.053 [2024-07-23 18:39:51.705084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.711804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.711842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.053 [2024-07-23 
18:39:51.711853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.711 ms 00:21:52.053 [2024-07-23 18:39:51.711862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.719535] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:52.053 [2024-07-23 18:39:51.722718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.722741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:52.053 [2024-07-23 18:39:51.722753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.802 ms 00:21:52.053 [2024-07-23 18:39:51.722761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.811822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.811861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:52.053 [2024-07-23 18:39:51.811889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.202 ms 00:21:52.053 [2024-07-23 18:39:51.811901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.812073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.812083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:52.053 [2024-07-23 18:39:51.812093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:21:52.053 [2024-07-23 18:39:51.812101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.815928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.815967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:52.053 [2024-07-23 18:39:51.815981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:21:52.053 [2024-07-23 18:39:51.815991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.818732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.818762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:52.053 [2024-07-23 18:39:51.818775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:21:52.053 [2024-07-23 18:39:51.818782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.819055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.819066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:52.053 [2024-07-23 18:39:51.819077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:21:52.053 [2024-07-23 18:39:51.819085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.865712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.865759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:52.053 [2024-07-23 18:39:51.865774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.688 ms 00:21:52.053 [2024-07-23 18:39:51.865785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.870189] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.870229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:52.053 [2024-07-23 18:39:51.870243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.375 ms 00:21:52.053 [2024-07-23 18:39:51.870251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.873580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.873613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:52.053 [2024-07-23 18:39:51.873625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:21:52.053 [2024-07-23 18:39:51.873632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.877012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.877043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:52.053 [2024-07-23 18:39:51.877056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.350 ms 00:21:52.053 [2024-07-23 18:39:51.877063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.877109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.877120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:52.053 [2024-07-23 18:39:51.877131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:52.053 [2024-07-23 18:39:51.877138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.877201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.053 [2024-07-23 18:39:51.877210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:52.053 [2024-07-23 18:39:51.877218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:52.053 [2024-07-23 18:39:51.877225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.053 [2024-07-23 18:39:51.878180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4051.986 ms, result 0 00:21:52.053 { 00:21:52.053 "name": "ftl0", 00:21:52.053 "uuid": "7b21931e-714e-41ab-a824-bda33f9430c9" 00:21:52.053 } 00:21:52.053 18:39:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:52.053 18:39:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:52.053 18:39:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:52.053 18:39:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:52.053 18:39:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:52.312 /dev/nbd0 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:52.312 1+0 records in 00:21:52.312 1+0 records out 00:21:52.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043628 s, 9.4 MB/s 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:21:52.312 18:39:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:52.572 [2024-07-23 18:39:52.383131] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:21:52.572 [2024-07-23 18:39:52.383247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92200 ] 00:21:52.572 [2024-07-23 18:39:52.527270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.572 [2024-07-23 18:39:52.575826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.280  Copying: 244/1024 [MB] (244 MBps) Copying: 482/1024 [MB] (237 MBps) Copying: 718/1024 [MB] (236 MBps) Copying: 945/1024 [MB] (226 MBps) Copying: 1024/1024 [MB] (average 235 MBps) 00:21:57.280 00:21:57.280 18:39:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:59.186 18:39:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:59.186 [2024-07-23 18:39:59.036339] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
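Steps 64-77 of dirty_shutdown.sh traced above set up the write path for the test: the bdev subsystem configuration is saved as JSON (presumably captured into the ftl.json that a later step passes via --json), ftl0 is exposed through the kernel NBD driver, a 1 GiB random test file is produced with spdk_dd, its md5sum is recorded as the reference checksum, and the file is then written back through /dev/nbd0 with O_DIRECT. A condensed sketch of that sequence using only the commands visible in the trace (paths shortened; the redirect into ftl.json is an assumption):

    ( echo '{"subsystems": ['
      scripts/rpc.py save_subsystem_config -n bdev
      echo ']}' ) > ftl.json                                      # steps 64-66; output file assumed
    modprobe nbd                                                   # step 70
    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0                   # step 71: export the FTL bdev as a block device
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144              # step 75: 1 GiB of random data
    md5sum testfile                                                # step 76: reference checksum
    build/bin/spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct  # step 77: write it through FTL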
00:21:59.186 [2024-07-23 18:39:59.036451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92276 ] 00:21:59.186 [2024-07-23 18:39:59.181676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.186 [2024-07-23 18:39:59.227556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.913  Copying: 19/1024 [MB] (19 MBps) Copying: 39/1024 [MB] (19 MBps) Copying: 58/1024 [MB] (18 MBps) Copying: 77/1024 [MB] (19 MBps) Copying: 97/1024 [MB] (20 MBps) Copying: 117/1024 [MB] (19 MBps) Copying: 137/1024 [MB] (19 MBps) Copying: 158/1024 [MB] (20 MBps) Copying: 181/1024 [MB] (22 MBps) Copying: 202/1024 [MB] (21 MBps) Copying: 224/1024 [MB] (21 MBps) Copying: 245/1024 [MB] (21 MBps) Copying: 267/1024 [MB] (21 MBps) Copying: 289/1024 [MB] (21 MBps) Copying: 310/1024 [MB] (21 MBps) Copying: 332/1024 [MB] (21 MBps) Copying: 353/1024 [MB] (20 MBps) Copying: 374/1024 [MB] (20 MBps) Copying: 395/1024 [MB] (21 MBps) Copying: 416/1024 [MB] (21 MBps) Copying: 437/1024 [MB] (21 MBps) Copying: 459/1024 [MB] (21 MBps) Copying: 481/1024 [MB] (21 MBps) Copying: 503/1024 [MB] (22 MBps) Copying: 525/1024 [MB] (21 MBps) Copying: 547/1024 [MB] (21 MBps) Copying: 568/1024 [MB] (21 MBps) Copying: 590/1024 [MB] (21 MBps) Copying: 611/1024 [MB] (21 MBps) Copying: 632/1024 [MB] (20 MBps) Copying: 652/1024 [MB] (20 MBps) Copying: 673/1024 [MB] (20 MBps) Copying: 694/1024 [MB] (20 MBps) Copying: 715/1024 [MB] (21 MBps) Copying: 736/1024 [MB] (21 MBps) Copying: 758/1024 [MB] (21 MBps) Copying: 779/1024 [MB] (21 MBps) Copying: 800/1024 [MB] (21 MBps) Copying: 822/1024 [MB] (21 MBps) Copying: 843/1024 [MB] (21 MBps) Copying: 865/1024 [MB] (21 MBps) Copying: 886/1024 [MB] (21 MBps) Copying: 908/1024 [MB] (21 MBps) Copying: 929/1024 [MB] (21 MBps) Copying: 950/1024 [MB] (20 MBps) Copying: 971/1024 [MB] (20 MBps) Copying: 991/1024 [MB] (20 MBps) Copying: 1012/1024 [MB] (20 MBps) Copying: 1024/1024 [MB] (average 21 MBps) 00:22:47.913 00:22:47.913 18:40:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:47.913 18:40:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:48.173 18:40:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:48.433 [2024-07-23 18:40:48.326500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.433 [2024-07-23 18:40:48.326556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:48.433 [2024-07-23 18:40:48.326582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:48.433 [2024-07-23 18:40:48.326594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.433 [2024-07-23 18:40:48.326619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:48.433 [2024-07-23 18:40:48.327346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.433 [2024-07-23 18:40:48.327362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:48.434 [2024-07-23 18:40:48.327378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:22:48.434 [2024-07-23 18:40:48.327385] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.329412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.329449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:48.434 [2024-07-23 18:40:48.329462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:22:48.434 [2024-07-23 18:40:48.329470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.346857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.346900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:48.434 [2024-07-23 18:40:48.346916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:22:48.434 [2024-07-23 18:40:48.346924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.352065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.352105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:48.434 [2024-07-23 18:40:48.352117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:22:48.434 [2024-07-23 18:40:48.352140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.353742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.353773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:48.434 [2024-07-23 18:40:48.353787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:22:48.434 [2024-07-23 18:40:48.353795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.359259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.359295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:48.434 [2024-07-23 18:40:48.359306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.439 ms 00:22:48.434 [2024-07-23 18:40:48.359314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.359423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.359432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:48.434 [2024-07-23 18:40:48.359442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:48.434 [2024-07-23 18:40:48.359455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.361651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.361685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:48.434 [2024-07-23 18:40:48.361697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:22:48.434 [2024-07-23 18:40:48.361705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.363375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.363403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:48.434 [2024-07-23 18:40:48.363415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:22:48.434 
[2024-07-23 18:40:48.363422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.364676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.364704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:48.434 [2024-07-23 18:40:48.364715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:22:48.434 [2024-07-23 18:40:48.364733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.365790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.434 [2024-07-23 18:40:48.365819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:48.434 [2024-07-23 18:40:48.365830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:22:48.434 [2024-07-23 18:40:48.365837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.434 [2024-07-23 18:40:48.365865] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:48.434 [2024-07-23 18:40:48.365880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.365998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 
[2024-07-23 18:40:48.366031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:22:48.434 [2024-07-23 18:40:48.366248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:48.434 [2024-07-23 18:40:48.366388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:48.435 [2024-07-23 18:40:48.366811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:48.435 [2024-07-23 18:40:48.366822] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b21931e-714e-41ab-a824-bda33f9430c9 00:22:48.435 [2024-07-23 18:40:48.366830] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:48.435 [2024-07-23 18:40:48.366840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:48.435 [2024-07-23 18:40:48.366846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:48.435 [2024-07-23 18:40:48.366855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:48.435 [2024-07-23 18:40:48.366872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:48.435 [2024-07-23 18:40:48.366882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:48.435 [2024-07-23 18:40:48.366889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:48.435 [2024-07-23 18:40:48.366897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:48.435 [2024-07-23 18:40:48.366903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:48.435 [2024-07-23 18:40:48.366912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.435 [2024-07-23 18:40:48.366920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:48.435 [2024-07-23 18:40:48.366930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:22:48.435 [2024-07-23 18:40:48.366938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.368667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.435 [2024-07-23 18:40:48.368695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:48.435 [2024-07-23 18:40:48.368708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:22:48.435 [2024-07-23 18:40:48.368715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.368819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.435 [2024-07-23 18:40:48.368828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:48.435 [2024-07-23 18:40:48.368840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:48.435 [2024-07-23 18:40:48.368855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.375167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.375190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.435 [2024-07-23 18:40:48.375201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.375216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.375271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.375280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.435 [2024-07-23 18:40:48.375291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.375299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.375366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.375377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.435 [2024-07-23 18:40:48.375388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.375396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.375414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.375422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.435 [2024-07-23 18:40:48.375437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.375447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.388999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.389127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.435 [2024-07-23 18:40:48.389162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.389182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.397121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.397234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.435 [2024-07-23 18:40:48.397268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.397289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.397386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.397411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.435 [2024-07-23 18:40:48.397465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.397486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.397552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:48.435 [2024-07-23 18:40:48.397630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.435 [2024-07-23 18:40:48.397665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.397686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.397826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.397866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.435 [2024-07-23 18:40:48.397899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.435 [2024-07-23 18:40:48.397929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.435 [2024-07-23 18:40:48.397988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.435 [2024-07-23 18:40:48.398024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:48.436 [2024-07-23 18:40:48.398060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.436 [2024-07-23 18:40:48.398088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.436 [2024-07-23 18:40:48.398147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.436 [2024-07-23 18:40:48.398181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.436 [2024-07-23 18:40:48.398216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.436 [2024-07-23 18:40:48.398243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.436 [2024-07-23 18:40:48.398304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.436 [2024-07-23 18:40:48.398338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.436 [2024-07-23 18:40:48.398379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.436 [2024-07-23 18:40:48.398409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.436 [2024-07-23 18:40:48.398596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.176 ms, result 0 00:22:48.436 true 00:22:48.436 18:40:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92064 00:22:48.436 18:40:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92064 00:22:48.436 18:40:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:48.695 [2024-07-23 18:40:48.495150] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
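With the clean "FTL shutdown" recorded above, the script then simulates a dirty shutdown: the spdk_tgt process is killed with SIGKILL instead of being torn down (step 83), its trace pid file is removed (step 84), a second 1 GiB random file is generated (step 87), and spdk_dd writes it directly to the ftl0 bdev (--ob=ftl0 --seek=262144), rebuilding the bdev stack from the saved JSON config (step 88); the FTL startup and blobstore recovery messages that follow come from that spdk_dd process. A minimal sketch, with $spdk_tgt_pid standing in for the pid (92064 here) captured when the target was started and paths shortened:

    kill -9 "$spdk_tgt_pid"                                                                  # step 83: no bdev_ftl_unload this time
    rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"                                       # step 84
    build/bin/spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144              # step 87
    build/bin/spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json  # step 88: write straight to the FTL bdev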
00:22:48.695 [2024-07-23 18:40:48.495382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92788 ] 00:22:48.695 [2024-07-23 18:40:48.641507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.695 [2024-07-23 18:40:48.686734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.154  Copying: 249/1024 [MB] (249 MBps) Copying: 500/1024 [MB] (251 MBps) Copying: 748/1024 [MB] (247 MBps) Copying: 987/1024 [MB] (238 MBps) Copying: 1024/1024 [MB] (average 247 MBps) 00:22:53.154 00:22:53.154 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92064 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:53.154 18:40:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:53.154 [2024-07-23 18:40:53.201107] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:22:53.154 [2024-07-23 18:40:53.201241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92841 ] 00:22:53.428 [2024-07-23 18:40:53.349375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.428 [2024-07-23 18:40:53.394464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.696 [2024-07-23 18:40:53.493001] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.696 [2024-07-23 18:40:53.493075] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.696 [2024-07-23 18:40:53.552947] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:53.696 [2024-07-23 18:40:53.553169] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:53.696 [2024-07-23 18:40:53.553407] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:53.955 [2024-07-23 18:40:53.837826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.837876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.955 [2024-07-23 18:40:53.837898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.955 [2024-07-23 18:40:53.837906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.837961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.837971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.955 [2024-07-23 18:40:53.837979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:53.955 [2024-07-23 18:40:53.837986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.838018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.955 [2024-07-23 18:40:53.838222] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.955 [2024-07-23 18:40:53.838245] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.838262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.955 [2024-07-23 18:40:53.838279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:22:53.955 [2024-07-23 18:40:53.838286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.839698] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:53.955 [2024-07-23 18:40:53.842270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.842302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:53.955 [2024-07-23 18:40:53.842313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:22:53.955 [2024-07-23 18:40:53.842320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.842398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.842409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:53.955 [2024-07-23 18:40:53.842420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:53.955 [2024-07-23 18:40:53.842438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.849003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.849034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.955 [2024-07-23 18:40:53.849043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.509 ms 00:22:53.955 [2024-07-23 18:40:53.849061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.849153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.849165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.955 [2024-07-23 18:40:53.849173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:53.955 [2024-07-23 18:40:53.849193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.849244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.849253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.955 [2024-07-23 18:40:53.849260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:53.955 [2024-07-23 18:40:53.849266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.849290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.955 [2024-07-23 18:40:53.850893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.850928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.955 [2024-07-23 18:40:53.850937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:22:53.955 [2024-07-23 18:40:53.850948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.850985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.850994] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.955 [2024-07-23 18:40:53.851002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.955 [2024-07-23 18:40:53.851009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.851029] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:53.955 [2024-07-23 18:40:53.851048] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:53.955 [2024-07-23 18:40:53.851097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:53.955 [2024-07-23 18:40:53.851115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:53.955 [2024-07-23 18:40:53.851211] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.955 [2024-07-23 18:40:53.851221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.955 [2024-07-23 18:40:53.851230] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:53.955 [2024-07-23 18:40:53.851246] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851254] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851262] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.955 [2024-07-23 18:40:53.851276] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.955 [2024-07-23 18:40:53.851283] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.955 [2024-07-23 18:40:53.851292] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.955 [2024-07-23 18:40:53.851300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.851306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.955 [2024-07-23 18:40:53.851314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:22:53.955 [2024-07-23 18:40:53.851321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.851394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.955 [2024-07-23 18:40:53.851402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.955 [2024-07-23 18:40:53.851409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:53.955 [2024-07-23 18:40:53.851416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.955 [2024-07-23 18:40:53.851516] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.955 [2024-07-23 18:40:53.851526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.955 [2024-07-23 18:40:53.851534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:22:53.955 [2024-07-23 18:40:53.851556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.955 [2024-07-23 18:40:53.851593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.955 [2024-07-23 18:40:53.851606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.955 [2024-07-23 18:40:53.851612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.955 [2024-07-23 18:40:53.851623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.955 [2024-07-23 18:40:53.851630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.955 [2024-07-23 18:40:53.851636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.955 [2024-07-23 18:40:53.851642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.955 [2024-07-23 18:40:53.851654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.955 [2024-07-23 18:40:53.851674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.955 [2024-07-23 18:40:53.851692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.955 [2024-07-23 18:40:53.851710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.955 [2024-07-23 18:40:53.851738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.955 [2024-07-23 18:40:53.851751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.955 [2024-07-23 18:40:53.851757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.955 [2024-07-23 18:40:53.851769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.955 [2024-07-23 18:40:53.851775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.955 [2024-07-23 18:40:53.851783] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.955 [2024-07-23 18:40:53.851789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.955 [2024-07-23 18:40:53.851795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.955 [2024-07-23 18:40:53.851802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:53.955 [2024-07-23 18:40:53.851814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.955 [2024-07-23 18:40:53.851820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.955 [2024-07-23 18:40:53.851826] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.955 [2024-07-23 18:40:53.851835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.956 [2024-07-23 18:40:53.851841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.956 [2024-07-23 18:40:53.851847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.956 [2024-07-23 18:40:53.851853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.956 [2024-07-23 18:40:53.851860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.956 [2024-07-23 18:40:53.851865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.956 [2024-07-23 18:40:53.851872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.956 [2024-07-23 18:40:53.851879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.956 [2024-07-23 18:40:53.851885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.956 [2024-07-23 18:40:53.851892] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.956 [2024-07-23 18:40:53.851900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.851907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.956 [2024-07-23 18:40:53.851914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.956 [2024-07-23 18:40:53.851920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.956 [2024-07-23 18:40:53.851926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.956 [2024-07-23 18:40:53.851934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.956 [2024-07-23 18:40:53.851958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.956 [2024-07-23 18:40:53.851966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:53.956 [2024-07-23 18:40:53.851973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:22:53.956 [2024-07-23 18:40:53.851980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.956 [2024-07-23 18:40:53.851987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.851994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.852001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.852008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.852014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.956 [2024-07-23 18:40:53.852031] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.956 [2024-07-23 18:40:53.852048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.852056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.956 [2024-07-23 18:40:53.852064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.956 [2024-07-23 18:40:53.852072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.956 [2024-07-23 18:40:53.852079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.956 [2024-07-23 18:40:53.852088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.852098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.956 [2024-07-23 18:40:53.852105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:22:53.956 [2024-07-23 18:40:53.852112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.875035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.875079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.956 [2024-07-23 18:40:53.875094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.914 ms 00:22:53.956 [2024-07-23 18:40:53.875110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.875221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.875232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.956 [2024-07-23 18:40:53.875242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:53.956 [2024-07-23 18:40:53.875268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.885347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:53.956 [2024-07-23 18:40:53.885380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.956 [2024-07-23 18:40:53.885393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.022 ms 00:22:53.956 [2024-07-23 18:40:53.885403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.885450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.885460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.956 [2024-07-23 18:40:53.885469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.956 [2024-07-23 18:40:53.885478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.885956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.885976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.956 [2024-07-23 18:40:53.885998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:22:53.956 [2024-07-23 18:40:53.886008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.886135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.886159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.956 [2024-07-23 18:40:53.886169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:53.956 [2024-07-23 18:40:53.886177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.892329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.892410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.956 [2024-07-23 18:40:53.892440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:22:53.956 [2024-07-23 18:40:53.892460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.895062] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:53.956 [2024-07-23 18:40:53.895156] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:53.956 [2024-07-23 18:40:53.895194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.895217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:53.956 [2024-07-23 18:40:53.895238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:22:53.956 [2024-07-23 18:40:53.895256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.909029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.909106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:53.956 [2024-07-23 18:40:53.909160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms 00:22:53.956 [2024-07-23 18:40:53.909181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.911426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.911512] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:53.956 [2024-07-23 18:40:53.911540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:22:53.956 [2024-07-23 18:40:53.911559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.913140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.913194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:53.956 [2024-07-23 18:40:53.913236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:22:53.956 [2024-07-23 18:40:53.913255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.913600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.913652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.956 [2024-07-23 18:40:53.913684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:53.956 [2024-07-23 18:40:53.913704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.935940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.936083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:53.956 [2024-07-23 18:40:53.936115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.217 ms 00:22:53.956 [2024-07-23 18:40:53.936136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.942233] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:53.956 [2024-07-23 18:40:53.945597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.945670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.956 [2024-07-23 18:40:53.945697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.419 ms 00:22:53.956 [2024-07-23 18:40:53.945730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.945834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.945877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:53.956 [2024-07-23 18:40:53.945906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:53.956 [2024-07-23 18:40:53.945936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.956 [2024-07-23 18:40:53.946012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.956 [2024-07-23 18:40:53.946060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.956 [2024-07-23 18:40:53.946090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:53.956 [2024-07-23 18:40:53.946109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.957 [2024-07-23 18:40:53.946149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.957 [2024-07-23 18:40:53.946181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.957 [2024-07-23 18:40:53.946209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.957 [2024-07-23 18:40:53.946248] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.957 [2024-07-23 18:40:53.946320] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:53.957 [2024-07-23 18:40:53.946355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.957 [2024-07-23 18:40:53.946392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:53.957 [2024-07-23 18:40:53.946423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:53.957 [2024-07-23 18:40:53.946458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.957 [2024-07-23 18:40:53.950121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.957 [2024-07-23 18:40:53.950191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.957 [2024-07-23 18:40:53.950223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.609 ms 00:22:53.957 [2024-07-23 18:40:53.950242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.957 [2024-07-23 18:40:53.950342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.957 [2024-07-23 18:40:53.950386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.957 [2024-07-23 18:40:53.950414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:53.957 [2024-07-23 18:40:53.950440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.957 [2024-07-23 18:40:53.951486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.474 ms, result 0 00:23:30.949  Copying: 24/1024 [MB] (24 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 97/1024 [MB] (25 MBps) Copying: 125/1024 [MB] (27 MBps) Copying: 153/1024 [MB] (27 MBps) Copying: 180/1024 [MB] (27 MBps) Copying: 208/1024 [MB] (27 MBps) Copying: 236/1024 [MB] (27 MBps) Copying: 263/1024 [MB] (27 MBps) Copying: 291/1024 [MB] (27 MBps) Copying: 319/1024 [MB] (27 MBps) Copying: 348/1024 [MB] (28 MBps) Copying: 376/1024 [MB] (28 MBps) Copying: 404/1024 [MB] (27 MBps) Copying: 433/1024 [MB] (29 MBps) Copying: 462/1024 [MB] (28 MBps) Copying: 490/1024 [MB] (28 MBps) Copying: 518/1024 [MB] (28 MBps) Copying: 546/1024 [MB] (27 MBps) Copying: 575/1024 [MB] (29 MBps) Copying: 605/1024 [MB] (29 MBps) Copying: 635/1024 [MB] (30 MBps) Copying: 665/1024 [MB] (30 MBps) Copying: 695/1024 [MB] (29 MBps) Copying: 725/1024 [MB] (30 MBps) Copying: 755/1024 [MB] (29 MBps) Copying: 784/1024 [MB] (28 MBps) Copying: 812/1024 [MB] (28 MBps) Copying: 841/1024 [MB] (29 MBps) Copying: 870/1024 [MB] (29 MBps) Copying: 900/1024 [MB] (29 MBps) Copying: 930/1024 [MB] (29 MBps) Copying: 960/1024 [MB] (29 MBps) Copying: 989/1024 [MB] (29 MBps) Copying: 1019/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-23 18:41:30.799852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.799946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.949 [2024-07-23 18:41:30.799963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:30.949 [2024-07-23 18:41:30.799973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.802259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:23:30.949 [2024-07-23 18:41:30.806619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.806664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.949 [2024-07-23 18:41:30.806675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:23:30.949 [2024-07-23 18:41:30.806691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.815105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.815141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.949 [2024-07-23 18:41:30.815153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.901 ms 00:23:30.949 [2024-07-23 18:41:30.815160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.837730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.837793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.949 [2024-07-23 18:41:30.837809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.589 ms 00:23:30.949 [2024-07-23 18:41:30.837818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.842740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.842768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.949 [2024-07-23 18:41:30.842777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.896 ms 00:23:30.949 [2024-07-23 18:41:30.842785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.844687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.844717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.949 [2024-07-23 18:41:30.844726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:23:30.949 [2024-07-23 18:41:30.844733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.848781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.848814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.949 [2024-07-23 18:41:30.848823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.033 ms 00:23:30.949 [2024-07-23 18:41:30.848843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.948543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.948605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.949 [2024-07-23 18:41:30.948619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.861 ms 00:23:30.949 [2024-07-23 18:41:30.948627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.950657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.950685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:30.949 [2024-07-23 18:41:30.950694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:23:30.949 [2024-07-23 18:41:30.950702] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.952243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.952273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:30.949 [2024-07-23 18:41:30.952282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:23:30.949 [2024-07-23 18:41:30.952290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.953501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.953533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.949 [2024-07-23 18:41:30.953541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:23:30.949 [2024-07-23 18:41:30.953549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.954628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.949 [2024-07-23 18:41:30.954656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.949 [2024-07-23 18:41:30.954664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:23:30.949 [2024-07-23 18:41:30.954670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.949 [2024-07-23 18:41:30.954691] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.949 [2024-07-23 18:41:30.954720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105728 / 261120 wr_cnt: 1 state: open 00:23:30.949 [2024-07-23 18:41:30.954731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.949 [2024-07-23 18:41:30.954782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 
[2024-07-23 18:41:30.954826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.954998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 
state: free 00:23:30.950 [2024-07-23 18:41:30.955006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 
0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.950 [2024-07-23 18:41:30.955434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.951 [2024-07-23 18:41:30.955441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.951 [2024-07-23 18:41:30.955448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.951 [2024-07-23 18:41:30.955470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.951 [2024-07-23 18:41:30.955501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b21931e-714e-41ab-a824-bda33f9430c9 00:23:30.951 [2024-07-23 18:41:30.955509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105728 00:23:30.951 [2024-07-23 18:41:30.955516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106688 00:23:30.951 [2024-07-23 18:41:30.955523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105728 00:23:30.951 [2024-07-23 18:41:30.955531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:23:30.951 [2024-07-23 18:41:30.955538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.951 [2024-07-23 18:41:30.955546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.951 [2024-07-23 18:41:30.955553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.951 [2024-07-23 18:41:30.955559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.951 [2024-07-23 18:41:30.955570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.951 [2024-07-23 18:41:30.955577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.951 [2024-07-23 18:41:30.955737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.951 [2024-07-23 18:41:30.955773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:23:30.951 [2024-07-23 18:41:30.955793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.958751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.951 [2024-07-23 18:41:30.958803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Deinitialize L2P 00:23:30.951 [2024-07-23 18:41:30.958841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:23:30.951 [2024-07-23 18:41:30.958868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.959074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.951 [2024-07-23 18:41:30.959119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.951 [2024-07-23 18:41:30.959144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:23:30.951 [2024-07-23 18:41:30.959161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.968329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.951 [2024-07-23 18:41:30.968398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.951 [2024-07-23 18:41:30.968427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.951 [2024-07-23 18:41:30.968452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.968515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.951 [2024-07-23 18:41:30.968535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.951 [2024-07-23 18:41:30.968554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.951 [2024-07-23 18:41:30.968584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.968649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.951 [2024-07-23 18:41:30.968686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.951 [2024-07-23 18:41:30.968713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.951 [2024-07-23 18:41:30.968750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.968778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.951 [2024-07-23 18:41:30.968803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.951 [2024-07-23 18:41:30.968832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.951 [2024-07-23 18:41:30.968857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.951 [2024-07-23 18:41:30.991016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.951 [2024-07-23 18:41:30.991105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.951 [2024-07-23 18:41:30.991150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.951 [2024-07-23 18:41:30.991169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.005130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.005225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.210 [2024-07-23 18:41:31.005257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.005277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.005337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 
18:41:31.005359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.210 [2024-07-23 18:41:31.005400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.005418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.005470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.005499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.210 [2024-07-23 18:41:31.005536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.005565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.005738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.005783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.210 [2024-07-23 18:41:31.005815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.005844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.005901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.005944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:31.210 [2024-07-23 18:41:31.005955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.005967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.006010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.006019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.210 [2024-07-23 18:41:31.006027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.006035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.006081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.210 [2024-07-23 18:41:31.006090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.210 [2024-07-23 18:41:31.006102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.210 [2024-07-23 18:41:31.006109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.210 [2024-07-23 18:41:31.006260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 208.338 ms, result 0 00:23:31.779 00:23:31.779 00:23:31.779 18:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:33.685 18:41:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:33.685 [2024-07-23 18:41:33.643402] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:23:33.685 [2024-07-23 18:41:33.643559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93253 ] 00:23:33.945 [2024-07-23 18:41:33.794211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.945 [2024-07-23 18:41:33.861903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.206 [2024-07-23 18:41:34.013293] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.206 [2024-07-23 18:41:34.013392] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.206 [2024-07-23 18:41:34.164885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.164965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.206 [2024-07-23 18:41:34.164980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.206 [2024-07-23 18:41:34.164987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.165049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.165059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.206 [2024-07-23 18:41:34.165077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:34.206 [2024-07-23 18:41:34.165094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.165114] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.206 [2024-07-23 18:41:34.165329] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.206 [2024-07-23 18:41:34.165346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.165357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.206 [2024-07-23 18:41:34.165365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:23:34.206 [2024-07-23 18:41:34.165372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.167829] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.206 [2024-07-23 18:41:34.171364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.171398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.206 [2024-07-23 18:41:34.171412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:23:34.206 [2024-07-23 18:41:34.171419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.171483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.171493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.206 [2024-07-23 18:41:34.171512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:34.206 [2024-07-23 18:41:34.171520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.183842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 
18:41:34.183870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.206 [2024-07-23 18:41:34.183884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.299 ms 00:23:34.206 [2024-07-23 18:41:34.183902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.183998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.184009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.206 [2024-07-23 18:41:34.184017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:34.206 [2024-07-23 18:41:34.184025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.184083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.184093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.206 [2024-07-23 18:41:34.184109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:34.206 [2024-07-23 18:41:34.184116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.184140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.206 [2024-07-23 18:41:34.186829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.186854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.206 [2024-07-23 18:41:34.186863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:23:34.206 [2024-07-23 18:41:34.186874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.186906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.186915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.206 [2024-07-23 18:41:34.186926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:34.206 [2024-07-23 18:41:34.186933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.186953] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.206 [2024-07-23 18:41:34.186978] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:34.206 [2024-07-23 18:41:34.187029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.206 [2024-07-23 18:41:34.187052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:34.206 [2024-07-23 18:41:34.187147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:34.206 [2024-07-23 18:41:34.187163] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.206 [2024-07-23 18:41:34.187173] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:34.206 [2024-07-23 18:41:34.187191] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.206 [2024-07-23 18:41:34.187200] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.206 [2024-07-23 18:41:34.187208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:34.206 [2024-07-23 18:41:34.187216] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.206 [2024-07-23 18:41:34.187232] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:34.206 [2024-07-23 18:41:34.187241] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:34.206 [2024-07-23 18:41:34.187260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.187267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.206 [2024-07-23 18:41:34.187274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:23:34.206 [2024-07-23 18:41:34.187285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.187372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.206 [2024-07-23 18:41:34.187382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.206 [2024-07-23 18:41:34.187398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:34.206 [2024-07-23 18:41:34.187408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.206 [2024-07-23 18:41:34.187502] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.206 [2024-07-23 18:41:34.187516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.206 [2024-07-23 18:41:34.187526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.206 [2024-07-23 18:41:34.187536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.206 [2024-07-23 18:41:34.187548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.206 [2024-07-23 18:41:34.187563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.206 [2024-07-23 18:41:34.187587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:34.206 [2024-07-23 18:41:34.187596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.206 [2024-07-23 18:41:34.187604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:34.206 [2024-07-23 18:41:34.187611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.206 [2024-07-23 18:41:34.187617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.206 [2024-07-23 18:41:34.187625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:34.206 [2024-07-23 18:41:34.187631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.207 [2024-07-23 18:41:34.187637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.207 [2024-07-23 18:41:34.187644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:34.207 [2024-07-23 18:41:34.187650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.207 [2024-07-23 18:41:34.187664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.207 [2024-07-23 18:41:34.187684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.207 [2024-07-23 18:41:34.187707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.207 [2024-07-23 18:41:34.187725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.207 [2024-07-23 18:41:34.187743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.207 [2024-07-23 18:41:34.187762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.207 [2024-07-23 18:41:34.187774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.207 [2024-07-23 18:41:34.187780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:34.207 [2024-07-23 18:41:34.187787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.207 [2024-07-23 18:41:34.187797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:34.207 [2024-07-23 18:41:34.187803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:34.207 [2024-07-23 18:41:34.187810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:34.207 [2024-07-23 18:41:34.187824] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:34.207 [2024-07-23 18:41:34.187831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187837] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.207 [2024-07-23 18:41:34.187845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.207 [2024-07-23 18:41:34.187865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.207 [2024-07-23 18:41:34.187880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.207 [2024-07-23 18:41:34.187903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.207 [2024-07-23 18:41:34.187910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.207 [2024-07-23 18:41:34.187917] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.207 [2024-07-23 18:41:34.187924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.207 [2024-07-23 18:41:34.187931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.207 [2024-07-23 18:41:34.187942] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.207 [2024-07-23 18:41:34.187952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.187960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:34.207 [2024-07-23 18:41:34.187968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:34.207 [2024-07-23 18:41:34.187975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:34.207 [2024-07-23 18:41:34.187984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:34.207 [2024-07-23 18:41:34.187991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:34.207 [2024-07-23 18:41:34.187999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:34.207 [2024-07-23 18:41:34.188006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:34.207 [2024-07-23 18:41:34.188013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:34.207 [2024-07-23 18:41:34.188021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:34.207 [2024-07-23 18:41:34.188028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.188036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.188043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.188051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.188058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:34.207 [2024-07-23 18:41:34.188068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.207 [2024-07-23 18:41:34.188077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.207 [2024-07-23 18:41:34.188085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:23:34.207 [2024-07-23 18:41:34.188093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.207 [2024-07-23 18:41:34.188112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.207 [2024-07-23 18:41:34.188120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.207 [2024-07-23 18:41:34.188129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.188137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.207 [2024-07-23 18:41:34.188144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:23:34.207 [2024-07-23 18:41:34.188155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.220845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.220954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.207 [2024-07-23 18:41:34.221039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.668 ms 00:23:34.207 [2024-07-23 18:41:34.221070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.221380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.221412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.207 [2024-07-23 18:41:34.221444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:23:34.207 [2024-07-23 18:41:34.221472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.243486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.243537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.207 [2024-07-23 18:41:34.243555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.803 ms 00:23:34.207 [2024-07-23 18:41:34.243588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.243659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.243674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.207 [2024-07-23 18:41:34.243688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.207 [2024-07-23 18:41:34.243707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.244612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.244644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.207 [2024-07-23 18:41:34.244659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:23:34.207 [2024-07-23 18:41:34.244670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.244871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.244891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.207 [2024-07-23 18:41:34.244906] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:23:34.207 [2024-07-23 18:41:34.244919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.207 [2024-07-23 18:41:34.255783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.207 [2024-07-23 18:41:34.255825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.207 [2024-07-23 18:41:34.255842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.846 ms 00:23:34.207 [2024-07-23 18:41:34.255852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.259694] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:34.468 [2024-07-23 18:41:34.259729] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.468 [2024-07-23 18:41:34.259748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.259758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.468 [2024-07-23 18:41:34.259768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:23:34.468 [2024-07-23 18:41:34.259777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.272393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.272457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.468 [2024-07-23 18:41:34.272470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.597 ms 00:23:34.468 [2024-07-23 18:41:34.272478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.274286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.274315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.468 [2024-07-23 18:41:34.274326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.766 ms 00:23:34.468 [2024-07-23 18:41:34.274333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.275860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.275889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.468 [2024-07-23 18:41:34.275898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:23:34.468 [2024-07-23 18:41:34.275905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.276190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.276208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.468 [2024-07-23 18:41:34.276217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:23:34.468 [2024-07-23 18:41:34.276224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.305109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.305185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.468 [2024-07-23 18:41:34.305201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.914 ms 00:23:34.468 
[2024-07-23 18:41:34.305210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.311341] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:34.468 [2024-07-23 18:41:34.315720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.315749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.468 [2024-07-23 18:41:34.315761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.477 ms 00:23:34.468 [2024-07-23 18:41:34.315769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.315855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.315865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.468 [2024-07-23 18:41:34.315874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:34.468 [2024-07-23 18:41:34.315881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.318125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.318162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.468 [2024-07-23 18:41:34.318176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:23:34.468 [2024-07-23 18:41:34.318183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.468 [2024-07-23 18:41:34.318212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.468 [2024-07-23 18:41:34.318233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.469 [2024-07-23 18:41:34.318241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:34.469 [2024-07-23 18:41:34.318255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.469 [2024-07-23 18:41:34.318305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.469 [2024-07-23 18:41:34.318322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.469 [2024-07-23 18:41:34.318333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.469 [2024-07-23 18:41:34.318343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:34.469 [2024-07-23 18:41:34.318361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.469 [2024-07-23 18:41:34.323133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.469 [2024-07-23 18:41:34.323164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.469 [2024-07-23 18:41:34.323186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:23:34.469 [2024-07-23 18:41:34.323194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.469 [2024-07-23 18:41:34.323261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.469 [2024-07-23 18:41:34.323270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.469 [2024-07-23 18:41:34.323286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:34.469 [2024-07-23 18:41:34.323298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.469 [2024-07-23 
18:41:34.328865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.202 ms, result 0 00:24:04.243  Copying: 1112/1048576 [kB] (1112 kBps) Copying: 6432/1048576 [kB] (5320 kBps) Copying: 43/1024 [MB] (36 MBps) Copying: 79/1024 [MB] (36 MBps) Copying: 116/1024 [MB] (36 MBps) Copying: 153/1024 [MB] (36 MBps) Copying: 191/1024 [MB] (37 MBps) Copying: 227/1024 [MB] (35 MBps) Copying: 264/1024 [MB] (37 MBps) Copying: 300/1024 [MB] (36 MBps) Copying: 337/1024 [MB] (36 MBps) Copying: 374/1024 [MB] (36 MBps) Copying: 409/1024 [MB] (34 MBps) Copying: 444/1024 [MB] (35 MBps) Copying: 481/1024 [MB] (36 MBps) Copying: 518/1024 [MB] (36 MBps) Copying: 555/1024 [MB] (37 MBps) Copying: 592/1024 [MB] (37 MBps) Copying: 631/1024 [MB] (38 MBps) Copying: 668/1024 [MB] (37 MBps) Copying: 706/1024 [MB] (37 MBps) Copying: 745/1024 [MB] (38 MBps) Copying: 784/1024 [MB] (39 MBps) Copying: 822/1024 [MB] (38 MBps) Copying: 859/1024 [MB] (36 MBps) Copying: 896/1024 [MB] (37 MBps) Copying: 934/1024 [MB] (37 MBps) Copying: 971/1024 [MB] (37 MBps) Copying: 1009/1024 [MB] (37 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-07-23 18:42:04.260190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.260380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:04.243 [2024-07-23 18:42:04.260401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:04.243 [2024-07-23 18:42:04.260412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.260444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.243 [2024-07-23 18:42:04.261930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.261953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:04.243 [2024-07-23 18:42:04.261965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:24:04.243 [2024-07-23 18:42:04.261974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.262204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.262216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:04.243 [2024-07-23 18:42:04.262227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:24:04.243 [2024-07-23 18:42:04.262236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.274744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.274802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:04.243 [2024-07-23 18:42:04.274827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.510 ms 00:24:04.243 [2024-07-23 18:42:04.274837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.280598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.280633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:04.243 [2024-07-23 18:42:04.280644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.730 ms 00:24:04.243 [2024-07-23 18:42:04.280652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:04.243 [2024-07-23 18:42:04.282596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.282631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:04.243 [2024-07-23 18:42:04.282641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.903 ms 00:24:04.243 [2024-07-23 18:42:04.282649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.286890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.286936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:04.243 [2024-07-23 18:42:04.286963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms 00:24:04.243 [2024-07-23 18:42:04.286971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.290354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.290392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:04.243 [2024-07-23 18:42:04.290403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:24:04.243 [2024-07-23 18:42:04.290412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.292436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.292471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:04.243 [2024-07-23 18:42:04.292481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms 00:24:04.243 [2024-07-23 18:42:04.292488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.243 [2024-07-23 18:42:04.293980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.243 [2024-07-23 18:42:04.294010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:04.243 [2024-07-23 18:42:04.294019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:24:04.243 [2024-07-23 18:42:04.294026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.504 [2024-07-23 18:42:04.295116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.504 [2024-07-23 18:42:04.295147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:04.504 [2024-07-23 18:42:04.295156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:24:04.504 [2024-07-23 18:42:04.295163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.504 [2024-07-23 18:42:04.296171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.504 [2024-07-23 18:42:04.296205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:04.504 [2024-07-23 18:42:04.296215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:24:04.504 [2024-07-23 18:42:04.296222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.504 [2024-07-23 18:42:04.296245] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:04.504 [2024-07-23 18:42:04.296260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:04.504 [2024-07-23 18:42:04.296271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 
261120 wr_cnt: 1 state: open 00:24:04.504 [2024-07-23 18:42:04.296280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296683] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:04.504 [2024-07-23 18:42:04.296837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296938] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.296995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:04.505 [2024-07-23 18:42:04.297141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:04.505 [2024-07-23 18:42:04.297154] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b21931e-714e-41ab-a824-bda33f9430c9 00:24:04.505 [2024-07-23 18:42:04.297162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:04.505 [2024-07-23 18:42:04.297169] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160704 00:24:04.505 [2024-07-23 18:42:04.297176] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158720 00:24:04.505 [2024-07-23 18:42:04.297184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:24:04.505 [2024-07-23 18:42:04.297192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:04.505 [2024-07-23 18:42:04.297200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:04.505 [2024-07-23 18:42:04.297208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:04.505 [2024-07-23 18:42:04.297215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:04.505 [2024-07-23 18:42:04.297221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:04.505 [2024-07-23 18:42:04.297229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.505 [2024-07-23 18:42:04.297238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:04.505 [2024-07-23 18:42:04.297246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:24:04.505 [2024-07-23 18:42:04.297255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.300103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.505 [2024-07-23 18:42:04.300124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:04.505 [2024-07-23 18:42:04.300133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:24:04.505 [2024-07-23 18:42:04.300141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.300316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.505 [2024-07-23 18:42:04.300334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:04.505 [2024-07-23 18:42:04.300342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:24:04.505 [2024-07-23 18:42:04.300350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.309643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.309668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.505 [2024-07-23 18:42:04.309678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.309686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.309731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.309744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.505 [2024-07-23 18:42:04.309753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.309770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.309835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.309846] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.505 [2024-07-23 18:42:04.309854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.309861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.309878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.309887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.505 [2024-07-23 18:42:04.309899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.309907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.333049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.333085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.505 [2024-07-23 18:42:04.333096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.333105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.346877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.346907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.505 [2024-07-23 18:42:04.346934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.346943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.346999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.347008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.505 [2024-07-23 18:42:04.347017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.347024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.347058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.347068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.505 [2024-07-23 18:42:04.347076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.347083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.347166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.347176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.505 [2024-07-23 18:42:04.347185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.347192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.347230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.347239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:04.505 [2024-07-23 18:42:04.347247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.505 [2024-07-23 18:42:04.347255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.505 [2024-07-23 18:42:04.347309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:04.505 [2024-07-23 18:42:04.347319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.506 [2024-07-23 18:42:04.347327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.506 [2024-07-23 18:42:04.347335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.506 [2024-07-23 18:42:04.347381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:04.506 [2024-07-23 18:42:04.347398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.506 [2024-07-23 18:42:04.347407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:04.506 [2024-07-23 18:42:04.347414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.506 [2024-07-23 18:42:04.347558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.510 ms, result 0 00:24:04.765 00:24:04.765 00:24:04.765 18:42:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:06.693 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:06.693 18:42:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.693 [2024-07-23 18:42:06.598955] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:06.693 [2024-07-23 18:42:06.599186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93591 ] 00:24:06.953 [2024-07-23 18:42:06.757343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.953 [2024-07-23 18:42:06.826642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.953 [2024-07-23 18:42:06.977334] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.953 [2024-07-23 18:42:06.977414] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.213 [2024-07-23 18:42:07.129005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.129065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.213 [2024-07-23 18:42:07.129086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:07.213 [2024-07-23 18:42:07.129095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.129151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.129171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.213 [2024-07-23 18:42:07.129179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:07.213 [2024-07-23 18:42:07.129189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.129207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.213 [2024-07-23 18:42:07.129417] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:24:07.213 [2024-07-23 18:42:07.129434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.129449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.213 [2024-07-23 18:42:07.129457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:24:07.213 [2024-07-23 18:42:07.129471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.131853] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.213 [2024-07-23 18:42:07.135446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.135487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.213 [2024-07-23 18:42:07.135504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:24:07.213 [2024-07-23 18:42:07.135511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.135594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.135614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.213 [2024-07-23 18:42:07.135626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:07.213 [2024-07-23 18:42:07.135633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.147976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.148009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.213 [2024-07-23 18:42:07.148019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.326 ms 00:24:07.213 [2024-07-23 18:42:07.148026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.148122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.148135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.213 [2024-07-23 18:42:07.148143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:07.213 [2024-07-23 18:42:07.148150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.148208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.148218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.213 [2024-07-23 18:42:07.148235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:07.213 [2024-07-23 18:42:07.148242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.148266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.213 [2024-07-23 18:42:07.150951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-07-23 18:42:07.150974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.213 [2024-07-23 18:42:07.150984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.697 ms 00:24:07.213 [2024-07-23 18:42:07.150991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.151022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:07.213 [2024-07-23 18:42:07.151031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.213 [2024-07-23 18:42:07.151042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.213 [2024-07-23 18:42:07.151048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-07-23 18:42:07.151070] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.213 [2024-07-23 18:42:07.151095] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.213 [2024-07-23 18:42:07.151136] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.213 [2024-07-23 18:42:07.151153] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:07.213 [2024-07-23 18:42:07.151239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.213 [2024-07-23 18:42:07.151256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.213 [2024-07-23 18:42:07.151266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:07.213 [2024-07-23 18:42:07.151276] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.213 [2024-07-23 18:42:07.151287] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.213 [2024-07-23 18:42:07.151303] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.213 [2024-07-23 18:42:07.151329] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.213 [2024-07-23 18:42:07.151337] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.213 [2024-07-23 18:42:07.151344] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.214 [2024-07-23 18:42:07.151353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.151360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.214 [2024-07-23 18:42:07.151369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:24:07.214 [2024-07-23 18:42:07.151380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-07-23 18:42:07.151449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.151458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.214 [2024-07-23 18:42:07.151467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:07.214 [2024-07-23 18:42:07.151480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-07-23 18:42:07.151564] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.214 [2024-07-23 18:42:07.151701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.214 [2024-07-23 18:42:07.151724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.214 [2024-07-23 18:42:07.151743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.151775] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.214 [2024-07-23 18:42:07.151795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.151832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.214 [2024-07-23 18:42:07.151870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.214 [2024-07-23 18:42:07.151890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.214 [2024-07-23 18:42:07.151918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.214 [2024-07-23 18:42:07.151946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.214 [2024-07-23 18:42:07.151975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.214 [2024-07-23 18:42:07.152004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.214 [2024-07-23 18:42:07.152033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.214 [2024-07-23 18:42:07.152061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.214 [2024-07-23 18:42:07.152086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.214 [2024-07-23 18:42:07.152149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.214 [2024-07-23 18:42:07.152242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.214 [2024-07-23 18:42:07.152319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.214 [2024-07-23 18:42:07.152402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.214 [2024-07-23 18:42:07.152490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.214 [2024-07-23 18:42:07.152588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.214 [2024-07-23 18:42:07.152604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.214 [2024-07-23 18:42:07.152610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.214 
[2024-07-23 18:42:07.152617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.214 [2024-07-23 18:42:07.152624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.214 [2024-07-23 18:42:07.152630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.214 [2024-07-23 18:42:07.152637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.214 [2024-07-23 18:42:07.152650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.214 [2024-07-23 18:42:07.152657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152670] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.214 [2024-07-23 18:42:07.152677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.214 [2024-07-23 18:42:07.152695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.214 [2024-07-23 18:42:07.152710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.214 [2024-07-23 18:42:07.152717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.214 [2024-07-23 18:42:07.152724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.214 [2024-07-23 18:42:07.152730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.214 [2024-07-23 18:42:07.152737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.214 [2024-07-23 18:42:07.152743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.214 [2024-07-23 18:42:07.152752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.214 [2024-07-23 18:42:07.152761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.214 [2024-07-23 18:42:07.152779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.214 [2024-07-23 18:42:07.152786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.214 [2024-07-23 18:42:07.152792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.214 [2024-07-23 18:42:07.152802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.214 [2024-07-23 18:42:07.152810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.214 [2024-07-23 18:42:07.152817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.214 [2024-07-23 18:42:07.152824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.214 [2024-07-23 18:42:07.152831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.214 [2024-07-23 18:42:07.152849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.214 [2024-07-23 18:42:07.152884] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.214 [2024-07-23 18:42:07.152891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.214 [2024-07-23 18:42:07.152905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.214 [2024-07-23 18:42:07.152921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.214 [2024-07-23 18:42:07.152930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.214 [2024-07-23 18:42:07.152942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.152951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.214 [2024-07-23 18:42:07.152966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:24:07.214 [2024-07-23 18:42:07.152977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-07-23 18:42:07.182031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.182073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.214 [2024-07-23 18:42:07.182093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.043 ms 00:24:07.214 [2024-07-23 18:42:07.182103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-07-23 18:42:07.182203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.182213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.214 [2024-07-23 18:42:07.182223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:07.214 [2024-07-23 18:42:07.182232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-07-23 
18:42:07.198196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-07-23 18:42:07.198241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.214 [2024-07-23 18:42:07.198251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.927 ms 00:24:07.215 [2024-07-23 18:42:07.198258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.198299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.198308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.215 [2024-07-23 18:42:07.198316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:07.215 [2024-07-23 18:42:07.198328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.199142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.199179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.215 [2024-07-23 18:42:07.199188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:24:07.215 [2024-07-23 18:42:07.199196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.199320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.199336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.215 [2024-07-23 18:42:07.199345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:07.215 [2024-07-23 18:42:07.199352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.209133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.209164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.215 [2024-07-23 18:42:07.209184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.774 ms 00:24:07.215 [2024-07-23 18:42:07.209192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.212785] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.215 [2024-07-23 18:42:07.212823] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.215 [2024-07-23 18:42:07.212839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.212848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.215 [2024-07-23 18:42:07.212857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:24:07.215 [2024-07-23 18:42:07.212864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.225197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.225232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.215 [2024-07-23 18:42:07.225249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.317 ms 00:24:07.215 [2024-07-23 18:42:07.225256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.227051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:07.215 [2024-07-23 18:42:07.227081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.215 [2024-07-23 18:42:07.227090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:24:07.215 [2024-07-23 18:42:07.227097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.228562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.228605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.215 [2024-07-23 18:42:07.228614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:24:07.215 [2024-07-23 18:42:07.228622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.228922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.228944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.215 [2024-07-23 18:42:07.228954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:07.215 [2024-07-23 18:42:07.228973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.215 [2024-07-23 18:42:07.258761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.215 [2024-07-23 18:42:07.258843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.215 [2024-07-23 18:42:07.258861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.817 ms 00:24:07.215 [2024-07-23 18:42:07.258884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.474 [2024-07-23 18:42:07.265374] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.474 [2024-07-23 18:42:07.269939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.474 [2024-07-23 18:42:07.269979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.474 [2024-07-23 18:42:07.269999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.018 ms 00:24:07.474 [2024-07-23 18:42:07.270007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.474 [2024-07-23 18:42:07.270101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.270112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.475 [2024-07-23 18:42:07.270122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:07.475 [2024-07-23 18:42:07.270130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.271502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.271531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.475 [2024-07-23 18:42:07.271545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:24:07.475 [2024-07-23 18:42:07.271553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.271604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.271613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.475 [2024-07-23 18:42:07.271621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 
00:24:07.475 [2024-07-23 18:42:07.271628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.271669] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.475 [2024-07-23 18:42:07.271679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.271688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.475 [2024-07-23 18:42:07.271711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:07.475 [2024-07-23 18:42:07.271720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.276379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.276413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.475 [2024-07-23 18:42:07.276423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:24:07.475 [2024-07-23 18:42:07.276432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.276499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.475 [2024-07-23 18:42:07.276509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.475 [2024-07-23 18:42:07.276518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:07.475 [2024-07-23 18:42:07.276533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.475 [2024-07-23 18:42:07.278024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.781 ms, result 0 00:24:38.631  Copying: 31/1024 [MB] (31 MBps) Copying: 63/1024 [MB] (32 MBps) Copying: 95/1024 [MB] (31 MBps) Copying: 127/1024 [MB] (32 MBps) Copying: 159/1024 [MB] (31 MBps) Copying: 192/1024 [MB] (32 MBps) Copying: 225/1024 [MB] (33 MBps) Copying: 259/1024 [MB] (33 MBps) Copying: 292/1024 [MB] (33 MBps) Copying: 324/1024 [MB] (32 MBps) Copying: 358/1024 [MB] (33 MBps) Copying: 392/1024 [MB] (33 MBps) Copying: 426/1024 [MB] (33 MBps) Copying: 460/1024 [MB] (34 MBps) Copying: 493/1024 [MB] (33 MBps) Copying: 528/1024 [MB] (34 MBps) Copying: 562/1024 [MB] (33 MBps) Copying: 596/1024 [MB] (34 MBps) Copying: 630/1024 [MB] (33 MBps) Copying: 662/1024 [MB] (32 MBps) Copying: 696/1024 [MB] (33 MBps) Copying: 729/1024 [MB] (33 MBps) Copying: 762/1024 [MB] (32 MBps) Copying: 795/1024 [MB] (33 MBps) Copying: 828/1024 [MB] (33 MBps) Copying: 862/1024 [MB] (33 MBps) Copying: 894/1024 [MB] (32 MBps) Copying: 927/1024 [MB] (33 MBps) Copying: 960/1024 [MB] (33 MBps) Copying: 994/1024 [MB] (33 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-07-23 18:42:38.444586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.444678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:38.631 [2024-07-23 18:42:38.444880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:38.631 [2024-07-23 18:42:38.444902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.444954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:38.631 [2024-07-23 18:42:38.446388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.446408] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:38.631 [2024-07-23 18:42:38.446421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:24:38.631 [2024-07-23 18:42:38.446446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.446764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.446783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:38.631 [2024-07-23 18:42:38.446796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:24:38.631 [2024-07-23 18:42:38.446813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.451860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.451893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:38.631 [2024-07-23 18:42:38.451906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:24:38.631 [2024-07-23 18:42:38.451917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.458088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.458139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:38.631 [2024-07-23 18:42:38.458149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:24:38.631 [2024-07-23 18:42:38.458160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.460032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.460072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:38.631 [2024-07-23 18:42:38.460082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:24:38.631 [2024-07-23 18:42:38.460090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.463562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.463607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:38.631 [2024-07-23 18:42:38.463618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:24:38.631 [2024-07-23 18:42:38.463625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.466559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.466607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:38.631 [2024-07-23 18:42:38.466619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:24:38.631 [2024-07-23 18:42:38.466635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.468254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.468287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:38.631 [2024-07-23 18:42:38.468297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:24:38.631 [2024-07-23 18:42:38.468304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.469640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.469671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:38.631 [2024-07-23 18:42:38.469680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:24:38.631 [2024-07-23 18:42:38.469687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.470687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.470716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:38.631 [2024-07-23 18:42:38.470725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:24:38.631 [2024-07-23 18:42:38.470731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.471633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.631 [2024-07-23 18:42:38.471660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:38.631 [2024-07-23 18:42:38.471669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:24:38.631 [2024-07-23 18:42:38.471675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.631 [2024-07-23 18:42:38.471689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:38.631 [2024-07-23 18:42:38.471702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:38.631 [2024-07-23 18:42:38.471712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:38.631 [2024-07-23 18:42:38.471721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:38.631 [2024-07-23 18:42:38.471776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 
18:42:38.471821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.471996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:24:38.632 [2024-07-23 18:42:38.472019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:38.632 [2024-07-23 18:42:38.472441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:38.633 [2024-07-23 18:42:38.472493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:38.633 [2024-07-23 18:42:38.472500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b21931e-714e-41ab-a824-bda33f9430c9 00:24:38.633 [2024-07-23 18:42:38.472509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:38.633 [2024-07-23 18:42:38.472535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:38.633 [2024-07-23 18:42:38.472543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:38.633 [2024-07-23 18:42:38.472551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:38.633 [2024-07-23 18:42:38.472558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:38.633 [2024-07-23 18:42:38.472566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:38.633 [2024-07-23 18:42:38.472577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:38.633 [2024-07-23 18:42:38.472732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:38.633 [2024-07-23 18:42:38.472750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:38.633 [2024-07-23 18:42:38.472768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.633 [2024-07-23 18:42:38.472788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:38.633 [2024-07-23 18:42:38.472807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:24:38.633 [2024-07-23 18:42:38.472835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.475649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.633 [2024-07-23 18:42:38.475669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:38.633 [2024-07-23 18:42:38.475677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.788 ms 00:24:38.633 [2024-07-23 
18:42:38.475687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.475867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.633 [2024-07-23 18:42:38.475876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:38.633 [2024-07-23 18:42:38.475884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:24:38.633 [2024-07-23 18:42:38.475891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.485056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.485125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:38.633 [2024-07-23 18:42:38.485163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.485184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.485253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.485283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:38.633 [2024-07-23 18:42:38.485309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.485352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.485426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.485469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:38.633 [2024-07-23 18:42:38.485496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.485520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.485554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.485593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:38.633 [2024-07-23 18:42:38.485619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.485637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.508338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.508435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:38.633 [2024-07-23 18:42:38.508470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.508503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.522517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.522635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:38.633 [2024-07-23 18:42:38.522666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.522688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.522775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.522799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:38.633 [2024-07-23 18:42:38.522831] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.522851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.522903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.522932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:38.633 [2024-07-23 18:42:38.522964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.522983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.523105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.523143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:38.633 [2024-07-23 18:42:38.523169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.523194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.523250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.523290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:38.633 [2024-07-23 18:42:38.523315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.523350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.523413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.523444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:38.633 [2024-07-23 18:42:38.523476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.523531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.523644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:38.633 [2024-07-23 18:42:38.523678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:38.633 [2024-07-23 18:42:38.523689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:38.633 [2024-07-23 18:42:38.523697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.633 [2024-07-23 18:42:38.523835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.391 ms, result 0 00:24:38.893 00:24:38.893 00:24:38.893 18:42:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:40.797 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92064 00:24:40.797 Process with pid 92064 is not found 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 92064 ']' 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 92064 00:24:40.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92064) - No such process 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 92064 is not found' 00:24:40.797 18:42:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:41.055 Remove shared memory files 00:24:41.055 18:42:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:41.056 ************************************ 00:24:41.056 END TEST ftl_dirty_shutdown 00:24:41.056 ************************************ 00:24:41.056 00:24:41.056 real 2m56.946s 00:24:41.056 user 3m23.083s 00:24:41.056 sys 0m27.143s 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:41.056 18:42:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:41.314 18:42:41 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:41.314 18:42:41 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:41.314 18:42:41 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:41.314 18:42:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:41.314 ************************************ 00:24:41.314 START TEST ftl_upgrade_shutdown 00:24:41.314 ************************************ 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:41.314 * Looking for test storage... 00:24:41.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:41.314 
18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94004 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94004 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94004 ']' 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:41.314 18:42:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:41.579 [2024-07-23 18:42:41.399031] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
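For reference, the tcp_target_setup phase traced above boils down to the following shell sequence. This is a condensed sketch reconstructed from the xtrace output, not the verbatim ftl/common.sh source; every concrete value shown (device addresses, sizes, pid, socket path) is taken from the trace itself.

    export FTL_BDEV=ftl
    export FTL_BASE=0000:00:11.0        # base (data) device PCI address
    export FTL_BASE_SIZE=20480          # MiB
    export FTL_CACHE=0000:00:10.0       # NV cache device PCI address
    export FTL_CACHE_SIZE=5120          # MiB
    export FTL_L2P_DRAM_LIMIT=2
    # Start the SPDK target pinned to core 0, then wait for its RPC socket.
    # Backgrounding via $! is assumed; the trace only records spdk_tgt_pid=94004.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"       # polls /var/tmp/spdk.sock until the target answers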
00:24:41.579 [2024-07-23 18:42:41.399276] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94004 ] 00:24:41.579 [2024-07-23 18:42:41.545888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.579 [2024-07-23 18:42:41.613457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:42.148 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
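The get_bdev_size helper whose locals are traced above derives a bdev's size in MiB from the RPC-reported geometry; the next lines of the trace show it applied to basen1. A minimal sketch of that computation, using only the rpc.py/jq calls and values visible in the trace (the actual helper in autotest_common.sh may differ in detail):

    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1310720 in this run
    bdev_size=$(( bs * nb / 1024 / 1024 ))          # 4096 * 1310720 / 2^20 = 5120 MiB
    echo "$bdev_size"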
00:24:42.406 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:42.664 { 00:24:42.664 "name": "basen1", 00:24:42.664 "aliases": [ 00:24:42.664 "06d4efc7-f04d-47e6-8663-a7e95d5b4205" 00:24:42.664 ], 00:24:42.664 "product_name": "NVMe disk", 00:24:42.664 "block_size": 4096, 00:24:42.664 "num_blocks": 1310720, 00:24:42.664 "uuid": "06d4efc7-f04d-47e6-8663-a7e95d5b4205", 00:24:42.664 "assigned_rate_limits": { 00:24:42.664 "rw_ios_per_sec": 0, 00:24:42.664 "rw_mbytes_per_sec": 0, 00:24:42.664 "r_mbytes_per_sec": 0, 00:24:42.664 "w_mbytes_per_sec": 0 00:24:42.664 }, 00:24:42.664 "claimed": true, 00:24:42.664 "claim_type": "read_many_write_one", 00:24:42.664 "zoned": false, 00:24:42.664 "supported_io_types": { 00:24:42.664 "read": true, 00:24:42.664 "write": true, 00:24:42.664 "unmap": true, 00:24:42.664 "write_zeroes": true, 00:24:42.664 "flush": true, 00:24:42.664 "reset": true, 00:24:42.664 "compare": true, 00:24:42.664 "compare_and_write": false, 00:24:42.664 "abort": true, 00:24:42.664 "nvme_admin": true, 00:24:42.664 "nvme_io": true 00:24:42.664 }, 00:24:42.664 "driver_specific": { 00:24:42.664 "nvme": [ 00:24:42.664 { 00:24:42.664 "pci_address": "0000:00:11.0", 00:24:42.664 "trid": { 00:24:42.664 "trtype": "PCIe", 00:24:42.664 "traddr": "0000:00:11.0" 00:24:42.664 }, 00:24:42.664 "ctrlr_data": { 00:24:42.664 "cntlid": 0, 00:24:42.664 "vendor_id": "0x1b36", 00:24:42.664 "model_number": "QEMU NVMe Ctrl", 00:24:42.664 "serial_number": "12341", 00:24:42.664 "firmware_revision": "8.0.0", 00:24:42.664 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:42.664 "oacs": { 00:24:42.664 "security": 0, 00:24:42.664 "format": 1, 00:24:42.664 "firmware": 0, 00:24:42.664 "ns_manage": 1 00:24:42.664 }, 00:24:42.664 "multi_ctrlr": false, 00:24:42.664 "ana_reporting": false 00:24:42.664 }, 00:24:42.664 "vs": { 00:24:42.664 "nvme_version": "1.4" 00:24:42.664 }, 00:24:42.664 "ns_data": { 00:24:42.664 "id": 1, 00:24:42.664 "can_share": false 00:24:42.664 } 00:24:42.664 } 00:24:42.664 ], 00:24:42.664 "mp_policy": "active_passive" 00:24:42.664 } 00:24:42.664 } 00:24:42.664 ]' 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:42.664 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:42.921 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2bb06a59-9376-4cad-ab82-0e3263c2e228 00:24:42.921 18:42:42 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:24:42.921 18:42:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2bb06a59-9376-4cad-ab82-0e3263c2e228 00:24:43.179 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:43.179 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=12262268-3ef1-4d4b-abd9-570247e98b44 00:24:43.179 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 12262268-3ef1-4d4b-abd9-570247e98b44 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4e6f1079-b4b3-4a8e-926f-1cb8e1987130 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 ]] 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 5120 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4e6f1079-b4b3-4a8e-926f-1cb8e1987130 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=4e6f1079-b4b3-4a8e-926f-1cb8e1987130 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:43.437 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:43.695 { 00:24:43.695 "name": "4e6f1079-b4b3-4a8e-926f-1cb8e1987130", 00:24:43.695 "aliases": [ 00:24:43.695 "lvs/basen1p0" 00:24:43.695 ], 00:24:43.695 "product_name": "Logical Volume", 00:24:43.695 "block_size": 4096, 00:24:43.695 "num_blocks": 5242880, 00:24:43.695 "uuid": "4e6f1079-b4b3-4a8e-926f-1cb8e1987130", 00:24:43.695 "assigned_rate_limits": { 00:24:43.695 "rw_ios_per_sec": 0, 00:24:43.695 "rw_mbytes_per_sec": 0, 00:24:43.695 "r_mbytes_per_sec": 0, 00:24:43.695 "w_mbytes_per_sec": 0 00:24:43.695 }, 00:24:43.695 "claimed": false, 00:24:43.695 "zoned": false, 00:24:43.695 "supported_io_types": { 00:24:43.695 "read": true, 00:24:43.695 "write": true, 00:24:43.695 "unmap": true, 00:24:43.695 "write_zeroes": true, 00:24:43.695 "flush": false, 00:24:43.695 "reset": true, 00:24:43.695 "compare": false, 00:24:43.695 "compare_and_write": false, 00:24:43.695 "abort": false, 00:24:43.695 "nvme_admin": false, 00:24:43.695 "nvme_io": false 00:24:43.695 }, 00:24:43.695 "driver_specific": { 00:24:43.695 "lvol": { 00:24:43.695 "lvol_store_uuid": "12262268-3ef1-4d4b-abd9-570247e98b44", 00:24:43.695 "base_bdev": "basen1", 00:24:43.695 "thin_provision": true, 00:24:43.695 "num_allocated_clusters": 0, 00:24:43.695 
"snapshot": false, 00:24:43.695 "clone": false, 00:24:43.695 "esnap_clone": false 00:24:43.695 } 00:24:43.695 } 00:24:43.695 } 00:24:43.695 ]' 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:43.695 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:43.954 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:43.954 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:43.954 18:42:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:44.213 18:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:44.213 18:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:44.213 18:42:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 -c cachen1p0 --l2p_dram_limit 2 00:24:44.213 [2024-07-23 18:42:44.249457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.213 [2024-07-23 18:42:44.249512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:44.213 [2024-07-23 18:42:44.249529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:44.213 [2024-07-23 18:42:44.249537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.213 [2024-07-23 18:42:44.249609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.213 [2024-07-23 18:42:44.249620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:44.213 [2024-07-23 18:42:44.249631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:24:44.213 [2024-07-23 18:42:44.249641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.213 [2024-07-23 18:42:44.249665] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:44.213 [2024-07-23 18:42:44.249877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:44.213 [2024-07-23 18:42:44.249899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.213 [2024-07-23 18:42:44.249911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:44.213 [2024-07-23 18:42:44.249922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:24:44.213 [2024-07-23 18:42:44.249931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.213 [2024-07-23 18:42:44.249993] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 6ed67cd7-2f04-4189-9459-a207b87110cf 00:24:44.213 [2024-07-23 18:42:44.252389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.213 [2024-07-23 18:42:44.252425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:44.213 [2024-07-23 18:42:44.252443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:24:44.213 [2024-07-23 18:42:44.252457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.473 [2024-07-23 18:42:44.266704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.473 [2024-07-23 18:42:44.266811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:44.473 [2024-07-23 18:42:44.266845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.224 ms 00:24:44.473 [2024-07-23 18:42:44.266871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.266973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.267023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:44.474 [2024-07-23 18:42:44.267036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:24:44.474 [2024-07-23 18:42:44.267056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.267128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.267142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:44.474 [2024-07-23 18:42:44.267152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:44.474 [2024-07-23 18:42:44.267162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.267187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:44.474 [2024-07-23 18:42:44.270054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.270092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:44.474 [2024-07-23 18:42:44.270105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.877 ms 00:24:44.474 [2024-07-23 18:42:44.270113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.270146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.270154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:44.474 [2024-07-23 18:42:44.270164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:44.474 [2024-07-23 18:42:44.270171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.270194] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:44.474 [2024-07-23 18:42:44.270338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:44.474 [2024-07-23 18:42:44.270354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:44.474 [2024-07-23 18:42:44.270364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:44.474 [2024-07-23 18:42:44.270377] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270386] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270403] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:44.474 [2024-07-23 18:42:44.270411] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:44.474 [2024-07-23 18:42:44.270423] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:44.474 [2024-07-23 18:42:44.270430] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:44.474 [2024-07-23 18:42:44.270440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.270447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:44.474 [2024-07-23 18:42:44.270457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:24:44.474 [2024-07-23 18:42:44.270464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.270532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.474 [2024-07-23 18:42:44.270540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:44.474 [2024-07-23 18:42:44.270553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:24:44.474 [2024-07-23 18:42:44.270560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.474 [2024-07-23 18:42:44.270655] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:44.474 [2024-07-23 18:42:44.270665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:44.474 [2024-07-23 18:42:44.270675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:44.474 [2024-07-23 18:42:44.270702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:44.474 [2024-07-23 18:42:44.270718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:44.474 [2024-07-23 18:42:44.270727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:44.474 [2024-07-23 18:42:44.270733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:44.474 [2024-07-23 18:42:44.270749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:44.474 [2024-07-23 18:42:44.270758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:44.474 [2024-07-23 18:42:44.270775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:44.474 [2024-07-23 18:42:44.270782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:44.474 [2024-07-23 18:42:44.270797] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:44.474 [2024-07-23 18:42:44.270805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:44.474 [2024-07-23 18:42:44.270820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:44.474 [2024-07-23 18:42:44.270826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:44.474 [2024-07-23 18:42:44.270841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:44.474 [2024-07-23 18:42:44.270850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:44.474 [2024-07-23 18:42:44.270865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:44.474 [2024-07-23 18:42:44.270882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:44.474 [2024-07-23 18:42:44.270898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:44.474 [2024-07-23 18:42:44.270910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:44.474 [2024-07-23 18:42:44.270924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:44.474 [2024-07-23 18:42:44.270930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:44.474 [2024-07-23 18:42:44.270945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:44.474 [2024-07-23 18:42:44.270954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:44.474 [2024-07-23 18:42:44.270968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.270982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:44.474 [2024-07-23 18:42:44.270988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:44.474 [2024-07-23 18:42:44.270996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.271001] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:44.474 [2024-07-23 18:42:44.271019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:44.474 [2024-07-23 18:42:44.271026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:44.474 [2024-07-23 18:42:44.271038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:44.474 [2024-07-23 18:42:44.271048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:44.474 [2024-07-23 18:42:44.271057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:44.474 [2024-07-23 18:42:44.271064] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:44.474 [2024-07-23 18:42:44.271074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:44.474 [2024-07-23 18:42:44.271081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:44.474 [2024-07-23 18:42:44.271090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:44.474 [2024-07-23 18:42:44.271101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:44.474 [2024-07-23 18:42:44.271113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:44.475 [2024-07-23 18:42:44.271134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:44.475 [2024-07-23 18:42:44.271183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:44.475 [2024-07-23 18:42:44.271195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:44.475 [2024-07-23 18:42:44.271202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:44.475 [2024-07-23 18:42:44.271215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:44.475 [2024-07-23 18:42:44.271274] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:44.475 [2024-07-23 18:42:44.271284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.475 [2024-07-23 18:42:44.271302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:44.475 [2024-07-23 18:42:44.271311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:44.475 [2024-07-23 18:42:44.271320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:44.475 [2024-07-23 18:42:44.271329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:44.475 [2024-07-23 18:42:44.271340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:44.475 [2024-07-23 18:42:44.271349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.725 ms 00:24:44.475 [2024-07-23 18:42:44.271361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:44.475 [2024-07-23 18:42:44.271430] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:44.475 [2024-07-23 18:42:44.271443] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:47.767 [2024-07-23 18:42:47.796978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:47.767 [2024-07-23 18:42:47.797055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:47.767 [2024-07-23 18:42:47.797071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3532.343 ms 00:24:47.767 [2024-07-23 18:42:47.797082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:47.767 [2024-07-23 18:42:47.816402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:47.767 [2024-07-23 18:42:47.816461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:47.767 [2024-07-23 18:42:47.816476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.216 ms 00:24:47.767 [2024-07-23 18:42:47.816488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:47.767 [2024-07-23 18:42:47.816542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:47.767 [2024-07-23 18:42:47.816559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:47.767 [2024-07-23 18:42:47.816585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:47.767 [2024-07-23 18:42:47.816597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.027 [2024-07-23 18:42:47.833068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.027 [2024-07-23 18:42:47.833113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:48.027 [2024-07-23 18:42:47.833125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.444 ms 00:24:48.027 [2024-07-23 18:42:47.833135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.027 [2024-07-23 18:42:47.833175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.027 [2024-07-23 18:42:47.833186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:48.027 [2024-07-23 18:42:47.833206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:48.027 [2024-07-23 18:42:47.833217] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.834055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.834077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:48.028 [2024-07-23 18:42:47.834086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.784 ms 00:24:48.028 [2024-07-23 18:42:47.834098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.834150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.834166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:48.028 [2024-07-23 18:42:47.834175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:24:48.028 [2024-07-23 18:42:47.834186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.846160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.846199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:48.028 [2024-07-23 18:42:47.846210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.978 ms 00:24:48.028 [2024-07-23 18:42:47.846221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.855387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:48.028 [2024-07-23 18:42:47.857093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.857116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:48.028 [2024-07-23 18:42:47.857128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.786 ms 00:24:48.028 [2024-07-23 18:42:47.857136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.884205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.884246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:48.028 [2024-07-23 18:42:47.884262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.085 ms 00:24:48.028 [2024-07-23 18:42:47.884274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.884379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.884390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:48.028 [2024-07-23 18:42:47.884403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:24:48.028 [2024-07-23 18:42:47.884412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.887364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.887398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:48.028 [2024-07-23 18:42:47.887413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.919 ms 00:24:48.028 [2024-07-23 18:42:47.887425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.890277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.890308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
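[editor's note] The FTL startup trace above is produced by the NV-cache split and bdev_ftl_create RPCs issued earlier in this run; condensed into a sketch (same bdev names, lvol UUID and limit as in the trace):

    # Sketch: split the 5120 MiB write-buffer partition off cachen1, then create
    # the FTL bdev on top of the thin lvol carved out of basen1 (20480 MiB),
    # with resident L2P capped at 2 MiB.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_split_create cachen1 -s 5120 1
    "$RPC" -t 60 bdev_ftl_create -b ftl -d 4e6f1079-b4b3-4a8e-926f-1cb8e1987130 -c cachen1p0 --l2p_dram_limit 2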
00:24:48.028 [2024-07-23 18:42:47.890323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.816 ms 00:24:48.028 [2024-07-23 18:42:47.890331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.890649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.890662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:48.028 [2024-07-23 18:42:47.890692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:24:48.028 [2024-07-23 18:42:47.890700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.934629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.934673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:48.028 [2024-07-23 18:42:47.934690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.980 ms 00:24:48.028 [2024-07-23 18:42:47.934701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.939932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.939966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:48.028 [2024-07-23 18:42:47.939981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.197 ms 00:24:48.028 [2024-07-23 18:42:47.939990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.943276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.943305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:24:48.028 [2024-07-23 18:42:47.943317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.251 ms 00:24:48.028 [2024-07-23 18:42:47.943325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.946657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.946686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:48.028 [2024-07-23 18:42:47.946699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.301 ms 00:24:48.028 [2024-07-23 18:42:47.946707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.946752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.946762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:48.028 [2024-07-23 18:42:47.946773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:48.028 [2024-07-23 18:42:47.946781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.946849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.028 [2024-07-23 18:42:47.946858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:48.028 [2024-07-23 18:42:47.946869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:24:48.028 [2024-07-23 18:42:47.946876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.028 [2024-07-23 18:42:47.948320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3705.525 ms, result 0 00:24:48.028 { 00:24:48.028 "name": 
"ftl", 00:24:48.028 "uuid": "6ed67cd7-2f04-4189-9459-a207b87110cf" 00:24:48.028 } 00:24:48.028 18:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:48.288 [2024-07-23 18:42:48.135787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.288 18:42:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:48.546 18:42:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:48.546 [2024-07-23 18:42:48.527522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:48.546 18:42:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:48.805 [2024-07-23 18:42:48.711523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:48.805 18:42:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:49.063 Fill FTL, iteration 1 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94126 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94126 /var/tmp/spdk.tgt.sock 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@827 -- # '[' -z 94126 ']' 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:49.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:49.063 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:49.322 [2024-07-23 18:42:49.119165] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:49.322 [2024-07-23 18:42:49.119361] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94126 ] 00:24:49.322 [2024-07-23 18:42:49.265861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.322 [2024-07-23 18:42:49.316178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.889 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:49.889 18:42:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:49.889 18:42:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:50.148 ftln1 00:24:50.148 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:50.148 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94126 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94126 ']' 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94126 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94126 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:50.408 killing process with pid 94126 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94126' 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94126 00:24:50.408 18:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94126 00:24:50.976 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:50.976 18:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:50.976 [2024-07-23 18:42:50.823333] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:24:50.976 [2024-07-23 18:42:50.823477] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94151 ] 00:24:50.976 [2024-07-23 18:42:50.952550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.976 [2024-07-23 18:42:51.005378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.428  Copying: 250/1024 [MB] (250 MBps) Copying: 517/1024 [MB] (267 MBps) Copying: 776/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 260 MBps) 00:24:55.428 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:55.428 Calculate MD5 checksum, iteration 1 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:55.428 18:42:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:55.428 [2024-07-23 18:42:55.455048] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
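[editor's note] Each iteration writes 1 GiB of random data into ftln1 over NVMe/TCP with spdk_dd and then reads the same region back into a file for checksumming; ini.json is the initiator bdev config saved earlier in the run. A condensed sketch of one pass, using the spdk_dd flags captured above:

    # Sketch: fill 1024 x 1 MiB blocks of ftln1, then read the region back for an MD5 check.
    SPDK=/home/vagrant/spdk_repo/spdk
    dd_args=(--cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$SPDK/test/ftl/config/ini.json")
    seek=0

    # Write stage: random data into the FTL namespace attached as ftln1.
    "$SPDK/build/bin/spdk_dd" "${dd_args[@]}" --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=$seek
    # Read-back stage: same offset and length, dumped to a regular file.
    "$SPDK/build/bin/spdk_dd" "${dd_args[@]}" --ib=ftln1 --of="$SPDK/test/ftl/file" \
        --bs=1048576 --count=1024 --qd=2 --skip=$seek
    md5sum "$SPDK/test/ftl/file" | cut -f1 -d' '   # recorded as sums[i] in the test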
00:24:55.428 [2024-07-23 18:42:55.455209] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94204 ] 00:24:55.687 [2024-07-23 18:42:55.608102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.687 [2024-07-23 18:42:55.655915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.897  Copying: 586/1024 [MB] (586 MBps) Copying: 1024/1024 [MB] (average 578 MBps) 00:24:57.897 00:24:57.897 18:42:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:57.897 18:42:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:59.806 Fill FTL, iteration 2 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e357908d06b454e8392d25d79acb1556 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:59.806 18:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:59.806 [2024-07-23 18:42:59.715745] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
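[editor's note] The seek/skip offsets advance by 1024 MiB per pass and each pass's checksum is kept for the post-shutdown comparison. A condensed sketch of the surrounding loop as the trace suggests (variable names taken from the traced upgrade_shutdown.sh, body elided):

    # Sketch: two fill/checksum passes, each 1024 MiB further into ftln1.
    iterations=2
    seek=0
    skip=0
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        # ... fill via spdk_dd --seek=$seek, read back via --skip=$skip (see sketch above) ...
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        (( seek += 1024, skip += 1024 ))
    done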
00:24:59.806 [2024-07-23 18:42:59.715981] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94254 ] 00:25:00.066 [2024-07-23 18:42:59.859750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.066 [2024-07-23 18:42:59.911660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.255  Copying: 245/1024 [MB] (245 MBps) Copying: 503/1024 [MB] (258 MBps) Copying: 768/1024 [MB] (265 MBps) Copying: 1024/1024 [MB] (average 258 MBps) 00:25:04.255 00:25:04.513 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:04.513 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:04.513 Calculate MD5 checksum, iteration 2 00:25:04.513 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:04.514 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:04.514 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:04.514 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:04.514 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:04.514 18:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:04.514 [2024-07-23 18:43:04.384515] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
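[editor's note] After the second pass, the test flips FTL properties over RPC and counts the cache chunks still holding data before requesting the shutdown upgrade; the exact jq filter appears in the trace below. A condensed sketch, assuming the same bdev name ftl:

    # Sketch: enable verbose mode and shutdown-upgrade preparation, then count
    # non-empty NV-cache chunks from bdev_ftl_get_properties.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true
    "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    used=$("$RPC" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    echo "$used"    # 3 in this run: two CLOSED chunks plus one partially written OPEN chunk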
00:25:04.514 [2024-07-23 18:43:04.384728] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94301 ] 00:25:04.514 [2024-07-23 18:43:04.531796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.772 [2024-07-23 18:43:04.580241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.251  Copying: 584/1024 [MB] (584 MBps) Copying: 1024/1024 [MB] (average 571 MBps) 00:25:09.251 00:25:09.251 18:43:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:09.251 18:43:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=756e3cc2fdc4bb22397aed0eaacdf0b3 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:11.156 [2024-07-23 18:43:10.860315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.156 [2024-07-23 18:43:10.860508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:11.156 [2024-07-23 18:43:10.860548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:11.156 [2024-07-23 18:43:10.860579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.156 [2024-07-23 18:43:10.860632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.156 [2024-07-23 18:43:10.860664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:11.156 [2024-07-23 18:43:10.860700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:11.156 [2024-07-23 18:43:10.860721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.156 [2024-07-23 18:43:10.860792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.156 [2024-07-23 18:43:10.860826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:11.156 [2024-07-23 18:43:10.860857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:11.156 [2024-07-23 18:43:10.860891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.156 [2024-07-23 18:43:10.860977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.660 ms, result 0 00:25:11.156 true 00:25:11.156 18:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.156 { 00:25:11.156 "name": "ftl", 00:25:11.156 "properties": [ 00:25:11.156 { 00:25:11.156 "name": "superblock_version", 00:25:11.156 "value": 5, 00:25:11.156 "read-only": true 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "name": "base_device", 00:25:11.156 "bands": [ 00:25:11.156 { 00:25:11.156 "id": 0, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 1, 00:25:11.156 "state": 
"FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 2, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 3, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 4, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 5, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 6, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 7, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 8, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 9, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 10, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 11, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 12, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 13, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 14, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 15, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 16, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 17, 00:25:11.156 "state": "FREE", 00:25:11.156 "validity": 0.0 00:25:11.156 } 00:25:11.156 ], 00:25:11.156 "read-only": true 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "name": "cache_device", 00:25:11.156 "type": "bdev", 00:25:11.156 "chunks": [ 00:25:11.156 { 00:25:11.156 "id": 0, 00:25:11.156 "state": "INACTIVE", 00:25:11.156 "utilization": 0.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 1, 00:25:11.156 "state": "CLOSED", 00:25:11.156 "utilization": 1.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 2, 00:25:11.156 "state": "CLOSED", 00:25:11.156 "utilization": 1.0 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 3, 00:25:11.156 "state": "OPEN", 00:25:11.156 "utilization": 0.001953125 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "id": 4, 00:25:11.156 "state": "OPEN", 00:25:11.156 "utilization": 0.0 00:25:11.156 } 00:25:11.156 ], 00:25:11.156 "read-only": true 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "name": "verbose_mode", 00:25:11.156 "value": true, 00:25:11.156 "unit": "", 00:25:11.156 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:11.156 }, 00:25:11.156 { 00:25:11.156 "name": "prep_upgrade_on_shutdown", 00:25:11.156 "value": false, 00:25:11.156 "unit": "", 00:25:11.156 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:11.156 } 00:25:11.157 ] 00:25:11.157 } 00:25:11.157 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:11.414 [2024-07-23 18:43:11.261594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.414 [2024-07-23 18:43:11.261683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Decode property 00:25:11.414 [2024-07-23 18:43:11.261698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:11.414 [2024-07-23 18:43:11.261706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.414 [2024-07-23 18:43:11.261736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.414 [2024-07-23 18:43:11.261746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:11.414 [2024-07-23 18:43:11.261755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:11.414 [2024-07-23 18:43:11.261762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.414 [2024-07-23 18:43:11.261781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.414 [2024-07-23 18:43:11.261789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:11.414 [2024-07-23 18:43:11.261797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:11.414 [2024-07-23 18:43:11.261805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.414 [2024-07-23 18:43:11.261865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.285 ms, result 0 00:25:11.414 true 00:25:11.414 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:11.414 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.414 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:11.673 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:11.673 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:11.673 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:11.673 [2024-07-23 18:43:11.652495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.673 [2024-07-23 18:43:11.652565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:11.673 [2024-07-23 18:43:11.652605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:11.673 [2024-07-23 18:43:11.652613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.673 [2024-07-23 18:43:11.652643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.673 [2024-07-23 18:43:11.652653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:11.673 [2024-07-23 18:43:11.652661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:11.673 [2024-07-23 18:43:11.652681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.673 [2024-07-23 18:43:11.652700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:11.673 [2024-07-23 18:43:11.652724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:11.673 [2024-07-23 18:43:11.652734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:11.673 [2024-07-23 18:43:11.652742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:11.673 [2024-07-23 18:43:11.652834] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.320 ms, result 0 00:25:11.673 true 00:25:11.673 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:11.932 { 00:25:11.932 "name": "ftl", 00:25:11.932 "properties": [ 00:25:11.932 { 00:25:11.932 "name": "superblock_version", 00:25:11.932 "value": 5, 00:25:11.932 "read-only": true 00:25:11.932 }, 00:25:11.932 { 00:25:11.932 "name": "base_device", 00:25:11.932 "bands": [ 00:25:11.932 { 00:25:11.932 "id": 0, 00:25:11.932 "state": "FREE", 00:25:11.932 "validity": 0.0 00:25:11.932 }, 00:25:11.932 { 00:25:11.932 "id": 1, 00:25:11.932 "state": "FREE", 00:25:11.932 "validity": 0.0 00:25:11.932 }, 00:25:11.932 { 00:25:11.932 "id": 2, 00:25:11.932 "state": "FREE", 00:25:11.932 "validity": 0.0 00:25:11.932 }, 00:25:11.932 { 00:25:11.932 "id": 3, 00:25:11.932 "state": "FREE", 00:25:11.932 "validity": 0.0 00:25:11.932 }, 00:25:11.932 { 00:25:11.932 "id": 4, 00:25:11.932 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 5, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 6, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 7, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 8, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 9, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 10, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 11, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 12, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 13, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 14, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 15, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 16, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 17, 00:25:11.933 "state": "FREE", 00:25:11.933 "validity": 0.0 00:25:11.933 } 00:25:11.933 ], 00:25:11.933 "read-only": true 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "name": "cache_device", 00:25:11.933 "type": "bdev", 00:25:11.933 "chunks": [ 00:25:11.933 { 00:25:11.933 "id": 0, 00:25:11.933 "state": "INACTIVE", 00:25:11.933 "utilization": 0.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 1, 00:25:11.933 "state": "CLOSED", 00:25:11.933 "utilization": 1.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 2, 00:25:11.933 "state": "CLOSED", 00:25:11.933 "utilization": 1.0 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 3, 00:25:11.933 "state": "OPEN", 00:25:11.933 "utilization": 0.001953125 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "id": 4, 00:25:11.933 "state": "OPEN", 00:25:11.933 "utilization": 0.0 00:25:11.933 } 00:25:11.933 ], 00:25:11.933 "read-only": true 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "name": "verbose_mode", 00:25:11.933 "value": true, 00:25:11.933 "unit": "", 00:25:11.933 "desc": "In 
verbose mode, user is able to get access to additional advanced FTL properties" 00:25:11.933 }, 00:25:11.933 { 00:25:11.933 "name": "prep_upgrade_on_shutdown", 00:25:11.933 "value": true, 00:25:11.933 "unit": "", 00:25:11.933 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:11.933 } 00:25:11.933 ] 00:25:11.933 } 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94004 ]] 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94004 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94004 ']' 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94004 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94004 00:25:11.933 killing process with pid 94004 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94004' 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94004 00:25:11.933 18:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94004 00:25:12.192 [2024-07-23 18:43:12.138768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:12.192 [2024-07-23 18:43:12.144069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.192 [2024-07-23 18:43:12.144110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:12.192 [2024-07-23 18:43:12.144126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:12.192 [2024-07-23 18:43:12.144135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:12.192 [2024-07-23 18:43:12.144160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:12.192 [2024-07-23 18:43:12.145430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:12.192 [2024-07-23 18:43:12.145454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:12.192 [2024-07-23 18:43:12.145463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.259 ms 00:25:12.192 [2024-07-23 18:43:12.145471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.329485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.329566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:20.351 [2024-07-23 18:43:19.329594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7197.833 ms 00:25:20.351 [2024-07-23 18:43:19.329603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.330786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.330819] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:20.351 [2024-07-23 18:43:19.330830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:25:20.351 [2024-07-23 18:43:19.330839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.331818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.331841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:20.351 [2024-07-23 18:43:19.331852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:25:20.351 [2024-07-23 18:43:19.331860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.334558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.334598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:20.351 [2024-07-23 18:43:19.334610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.661 ms 00:25:20.351 [2024-07-23 18:43:19.334618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.336885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.336932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:20.351 [2024-07-23 18:43:19.336943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.241 ms 00:25:20.351 [2024-07-23 18:43:19.336952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.337016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.337026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:20.351 [2024-07-23 18:43:19.337035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:20.351 [2024-07-23 18:43:19.337049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.338315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.338348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:20.351 [2024-07-23 18:43:19.338359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.252 ms 00:25:20.351 [2024-07-23 18:43:19.338367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.339704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.339734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:20.351 [2024-07-23 18:43:19.339744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:25:20.351 [2024-07-23 18:43:19.339752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.340869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.351 [2024-07-23 18:43:19.340898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:20.351 [2024-07-23 18:43:19.340907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.094 ms 00:25:20.351 [2024-07-23 18:43:19.340914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.351 [2024-07-23 18:43:19.342068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.352 
[2024-07-23 18:43:19.342098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:20.352 [2024-07-23 18:43:19.342108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.091 ms 00:25:20.352 [2024-07-23 18:43:19.342115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.342139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:20.352 [2024-07-23 18:43:19.342153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:20.352 [2024-07-23 18:43:19.342177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:20.352 [2024-07-23 18:43:19.342187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:20.352 [2024-07-23 18:43:19.342196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.352 [2024-07-23 18:43:19.342323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:20.352 [2024-07-23 18:43:19.342331] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6ed67cd7-2f04-4189-9459-a207b87110cf 00:25:20.352 [2024-07-23 18:43:19.342340] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:20.352 [2024-07-23 18:43:19.342348] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:20.352 [2024-07-23 18:43:19.342355] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:20.352 [2024-07-23 18:43:19.342365] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:20.352 [2024-07-23 18:43:19.342373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:20.352 [2024-07-23 18:43:19.342383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:20.352 [2024-07-23 18:43:19.342395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:20.352 [2024-07-23 18:43:19.342402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:20.352 [2024-07-23 18:43:19.342409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:20.352 [2024-07-23 18:43:19.342420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.352 [2024-07-23 18:43:19.342429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:20.352 [2024-07-23 18:43:19.342438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:25:20.352 [2024-07-23 18:43:19.342446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.345470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.352 [2024-07-23 18:43:19.345491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:20.352 [2024-07-23 18:43:19.345500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.013 ms 00:25:20.352 [2024-07-23 18:43:19.345509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.345696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.352 [2024-07-23 18:43:19.345706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:20.352 [2024-07-23 18:43:19.345714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:25:20.352 [2024-07-23 18:43:19.345722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.356317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.356345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:20.352 [2024-07-23 18:43:19.356358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.356366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.356414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.356424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:20.352 [2024-07-23 18:43:19.356433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.356441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.356499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.356511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:20.352 [2024-07-23 18:43:19.356519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.356526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.356548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 
18:43:19.356556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:20.352 [2024-07-23 18:43:19.356565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.356586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.381051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.381086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:20.352 [2024-07-23 18:43:19.381098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.381107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:20.352 [2024-07-23 18:43:19.394303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:20.352 [2024-07-23 18:43:19.394414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:20.352 [2024-07-23 18:43:19.394489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:20.352 [2024-07-23 18:43:19.394641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:20.352 [2024-07-23 18:43:19.394724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:20.352 [2024-07-23 18:43:19.394807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.394887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:25:20.352 [2024-07-23 18:43:19.394902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:20.352 [2024-07-23 18:43:19.394911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:20.352 [2024-07-23 18:43:19.394919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.352 [2024-07-23 18:43:19.395054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7264.932 ms, result 0 00:25:22.888 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:22.888 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:22.888 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:22.888 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:22.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94512 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94512 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94512 ']' 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:22.889 18:43:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:23.148 [2024-07-23 18:43:23.015466] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
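At this point the trace shows the FTL shutdown (run with prep_upgrade_on_shutdown enabled) completing, and the test bringing the TCP target back up from the JSON configuration saved earlier. A minimal sketch of that restart-and-wait step, reconstructed from the xtrace above; paths, cpumask and socket are taken from the trace, while the polling loop is an assumption standing in for the waitforlisten helper used by the test:

    SPDK=/home/vagrant/spdk_repo/spdk
    # relaunch the target on core 0 from the saved FTL/NVMe-oF configuration
    $SPDK/build/bin/spdk_tgt '--cpumask=[0]' --config=$SPDK/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # simplified stand-in for waitforlisten: poll the default RPC socket until the
    # target answers, then continue issuing bdev_ftl_* RPCs
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done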
00:25:23.148 [2024-07-23 18:43:23.015655] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94512 ] 00:25:23.148 [2024-07-23 18:43:23.161164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.407 [2024-07-23 18:43:23.233125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.666 [2024-07-23 18:43:23.639604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:23.666 [2024-07-23 18:43:23.639677] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:23.925 [2024-07-23 18:43:23.776173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.776221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:23.925 [2024-07-23 18:43:23.776235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:23.925 [2024-07-23 18:43:23.776243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.776308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.776317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:23.925 [2024-07-23 18:43:23.776327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:23.925 [2024-07-23 18:43:23.776339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.776360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:23.925 [2024-07-23 18:43:23.776550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:23.925 [2024-07-23 18:43:23.776564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.776590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:23.925 [2024-07-23 18:43:23.776615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:25:23.925 [2024-07-23 18:43:23.776623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.779013] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:23.925 [2024-07-23 18:43:23.782555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.782596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:23.925 [2024-07-23 18:43:23.782611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.546 ms 00:25:23.925 [2024-07-23 18:43:23.782619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.782676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.782687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:23.925 [2024-07-23 18:43:23.782695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:23.925 [2024-07-23 18:43:23.782702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.794970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.794994] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:23.925 [2024-07-23 18:43:23.795008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.246 ms 00:25:23.925 [2024-07-23 18:43:23.795016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.795064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.795076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:23.925 [2024-07-23 18:43:23.795084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:23.925 [2024-07-23 18:43:23.795097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.795157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.795167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:23.925 [2024-07-23 18:43:23.795175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:23.925 [2024-07-23 18:43:23.795182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.795209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:23.925 [2024-07-23 18:43:23.797945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.797971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:23.925 [2024-07-23 18:43:23.797980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.749 ms 00:25:23.925 [2024-07-23 18:43:23.797994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.798026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.925 [2024-07-23 18:43:23.798045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:23.925 [2024-07-23 18:43:23.798053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:23.925 [2024-07-23 18:43:23.798061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.925 [2024-07-23 18:43:23.798095] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:23.925 [2024-07-23 18:43:23.798118] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:23.925 [2024-07-23 18:43:23.798164] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:23.925 [2024-07-23 18:43:23.798183] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:23.926 [2024-07-23 18:43:23.798265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:23.926 [2024-07-23 18:43:23.798274] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:23.926 [2024-07-23 18:43:23.798284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:23.926 [2024-07-23 18:43:23.798293] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798302] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 
MiB 00:25:23.926 [2024-07-23 18:43:23.798311] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:23.926 [2024-07-23 18:43:23.798326] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:23.926 [2024-07-23 18:43:23.798336] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:23.926 [2024-07-23 18:43:23.798344] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:23.926 [2024-07-23 18:43:23.798364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.926 [2024-07-23 18:43:23.798373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:23.926 [2024-07-23 18:43:23.798381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:25:23.926 [2024-07-23 18:43:23.798388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.926 [2024-07-23 18:43:23.798454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.926 [2024-07-23 18:43:23.798469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:23.926 [2024-07-23 18:43:23.798484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:25:23.926 [2024-07-23 18:43:23.798491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.926 [2024-07-23 18:43:23.798593] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:23.926 [2024-07-23 18:43:23.798608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:23.926 [2024-07-23 18:43:23.798616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:23.926 [2024-07-23 18:43:23.798641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:23.926 [2024-07-23 18:43:23.798657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:23.926 [2024-07-23 18:43:23.798665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:23.926 [2024-07-23 18:43:23.798672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:23.926 [2024-07-23 18:43:23.798686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:23.926 [2024-07-23 18:43:23.798693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:23.926 [2024-07-23 18:43:23.798706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:23.926 [2024-07-23 18:43:23.798712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:23.926 [2024-07-23 18:43:23.798725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:23.926 [2024-07-23 18:43:23.798731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798738] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:23.926 [2024-07-23 18:43:23.798744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:23.926 [2024-07-23 18:43:23.798764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:23.926 [2024-07-23 18:43:23.798783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:23.926 [2024-07-23 18:43:23.798802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:23.926 [2024-07-23 18:43:23.798821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:23.926 [2024-07-23 18:43:23.798840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:23.926 [2024-07-23 18:43:23.798861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:23.926 [2024-07-23 18:43:23.798888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:23.926 [2024-07-23 18:43:23.798895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798901] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:23.926 [2024-07-23 18:43:23.798909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:23.926 [2024-07-23 18:43:23.798918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:23.926 [2024-07-23 18:43:23.798932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:23.926 [2024-07-23 18:43:23.798939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:23.926 [2024-07-23 18:43:23.798946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:23.926 [2024-07-23 18:43:23.798952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:23.926 [2024-07-23 18:43:23.798961] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:23.926 [2024-07-23 18:43:23.798967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:23.926 [2024-07-23 18:43:23.798982] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:23.926 [2024-07-23 18:43:23.798991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:23.926 [2024-07-23 18:43:23.799008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:23.926 [2024-07-23 18:43:23.799029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:23.926 [2024-07-23 18:43:23.799036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:23.926 [2024-07-23 18:43:23.799043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:23.926 [2024-07-23 18:43:23.799050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:23.926 [2024-07-23 18:43:23.799102] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:23.926 [2024-07-23 18:43:23.799113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.926 [2024-07-23 18:43:23.799130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 
blk_offs:0x40 blk_sz:0x480000 00:25:23.926 [2024-07-23 18:43:23.799138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:23.926 [2024-07-23 18:43:23.799144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:23.926 [2024-07-23 18:43:23.799152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:23.926 [2024-07-23 18:43:23.799160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:23.926 [2024-07-23 18:43:23.799177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:25:23.926 [2024-07-23 18:43:23.799184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:23.926 [2024-07-23 18:43:23.799262] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:23.927 [2024-07-23 18:43:23.799282] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:27.217 [2024-07-23 18:43:27.150066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.150247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:27.217 [2024-07-23 18:43:27.150297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3357.263 ms 00:25:27.217 [2024-07-23 18:43:27.150316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.169487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.169661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:27.217 [2024-07-23 18:43:27.169720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.028 ms 00:25:27.217 [2024-07-23 18:43:27.169761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.169883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.169919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:27.217 [2024-07-23 18:43:27.169952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:27.217 [2024-07-23 18:43:27.170001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.186399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.186509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:27.217 [2024-07-23 18:43:27.186540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.345 ms 00:25:27.217 [2024-07-23 18:43:27.186560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.186680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.186726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:27.217 [2024-07-23 18:43:27.186768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:27.217 [2024-07-23 18:43:27.186807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.187640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.187689] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:27.217 [2024-07-23 18:43:27.187718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.763 ms 00:25:27.217 [2024-07-23 18:43:27.187743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.187812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.187858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:27.217 [2024-07-23 18:43:27.187890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:27.217 [2024-07-23 18:43:27.187917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.199841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.199916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:27.217 [2024-07-23 18:43:27.199946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.899 ms 00:25:27.217 [2024-07-23 18:43:27.199973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.203707] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:27.217 [2024-07-23 18:43:27.203791] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:27.217 [2024-07-23 18:43:27.203868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.203909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:27.217 [2024-07-23 18:43:27.203935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.743 ms 00:25:27.217 [2024-07-23 18:43:27.203960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.207428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.207490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:27.217 [2024-07-23 18:43:27.207536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.414 ms 00:25:27.217 [2024-07-23 18:43:27.207591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.209025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.209087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:27.217 [2024-07-23 18:43:27.209116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.368 ms 00:25:27.217 [2024-07-23 18:43:27.209141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.210554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.210624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:27.217 [2024-07-23 18:43:27.210654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:25:27.217 [2024-07-23 18:43:27.210678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.210993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.211050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:27.217 [2024-07-23 18:43:27.211082] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:25:27.217 [2024-07-23 18:43:27.211130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.250838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.251017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:27.217 [2024-07-23 18:43:27.251054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.734 ms 00:25:27.217 [2024-07-23 18:43:27.251074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.217 [2024-07-23 18:43:27.257161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:27.217 [2024-07-23 18:43:27.258202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.217 [2024-07-23 18:43:27.258254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:27.217 [2024-07-23 18:43:27.258285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.067 ms 00:25:27.217 [2024-07-23 18:43:27.258324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.258408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.258442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:27.218 [2024-07-23 18:43:27.258469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:27.218 [2024-07-23 18:43:27.258502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.258585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.258623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:27.218 [2024-07-23 18:43:27.258649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:27.218 [2024-07-23 18:43:27.258677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.258723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.258755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:27.218 [2024-07-23 18:43:27.258780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:27.218 [2024-07-23 18:43:27.258821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.258881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:27.218 [2024-07-23 18:43:27.258911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.258935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:27.218 [2024-07-23 18:43:27.258974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:27.218 [2024-07-23 18:43:27.258999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.263234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.263302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:27.218 [2024-07-23 18:43:27.263347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.200 ms 00:25:27.218 [2024-07-23 18:43:27.263376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.263484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.218 [2024-07-23 18:43:27.263560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:27.218 [2024-07-23 18:43:27.263603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:27.218 [2024-07-23 18:43:27.263641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.218 [2024-07-23 18:43:27.265146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3495.153 ms, result 0 00:25:27.476 [2024-07-23 18:43:27.278657] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.476 [2024-07-23 18:43:27.294617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:27.476 [2024-07-23 18:43:27.302715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:27.476 18:43:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:27.476 18:43:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:27.476 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:27.476 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:27.476 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:27.736 [2024-07-23 18:43:27.566201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.737 [2024-07-23 18:43:27.566287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:27.737 [2024-07-23 18:43:27.566317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:27.737 [2024-07-23 18:43:27.566338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.737 [2024-07-23 18:43:27.566373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.737 [2024-07-23 18:43:27.566394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:27.737 [2024-07-23 18:43:27.566411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:27.737 [2024-07-23 18:43:27.566428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.737 [2024-07-23 18:43:27.566484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:27.737 [2024-07-23 18:43:27.566504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:27.737 [2024-07-23 18:43:27.566534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:27.737 [2024-07-23 18:43:27.566552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:27.737 [2024-07-23 18:43:27.566681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.470 ms, result 0 00:25:27.737 true 00:25:27.737 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:27.737 { 00:25:27.737 "name": "ftl", 00:25:27.737 "properties": [ 00:25:27.737 { 00:25:27.737 "name": "superblock_version", 00:25:27.737 "value": 5, 00:25:27.737 "read-only": true 00:25:27.737 }, 00:25:27.737 { 
00:25:27.737 "name": "base_device", 00:25:27.737 "bands": [ 00:25:27.737 { 00:25:27.737 "id": 0, 00:25:27.737 "state": "CLOSED", 00:25:27.737 "validity": 1.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 1, 00:25:27.737 "state": "CLOSED", 00:25:27.737 "validity": 1.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 2, 00:25:27.737 "state": "CLOSED", 00:25:27.737 "validity": 0.007843137254901933 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 3, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 4, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 5, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 6, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 7, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 8, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 9, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 10, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 11, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 12, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 13, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 14, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 15, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 16, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 17, 00:25:27.737 "state": "FREE", 00:25:27.737 "validity": 0.0 00:25:27.737 } 00:25:27.737 ], 00:25:27.737 "read-only": true 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "name": "cache_device", 00:25:27.737 "type": "bdev", 00:25:27.737 "chunks": [ 00:25:27.737 { 00:25:27.737 "id": 0, 00:25:27.737 "state": "INACTIVE", 00:25:27.737 "utilization": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 1, 00:25:27.737 "state": "OPEN", 00:25:27.737 "utilization": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 2, 00:25:27.737 "state": "OPEN", 00:25:27.737 "utilization": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 3, 00:25:27.737 "state": "FREE", 00:25:27.737 "utilization": 0.0 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "id": 4, 00:25:27.737 "state": "FREE", 00:25:27.737 "utilization": 0.0 00:25:27.737 } 00:25:27.737 ], 00:25:27.737 "read-only": true 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "name": "verbose_mode", 00:25:27.737 "value": true, 00:25:27.737 "unit": "", 00:25:27.737 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:27.737 }, 00:25:27.737 { 00:25:27.737 "name": "prep_upgrade_on_shutdown", 00:25:27.737 "value": false, 00:25:27.737 "unit": "", 00:25:27.737 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:27.737 } 00:25:27.737 ] 00:25:27.737 } 00:25:28.003 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:25:28.003 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:28.003 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.003 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:28.003 18:43:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:28.003 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:28.003 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:28.003 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.273 Validate MD5 checksum, iteration 1 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:28.273 18:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:28.273 [2024-07-23 18:43:28.297622] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
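The xtrace lines above show how upgrade_shutdown.sh inspects the FTL device before the shutdown scenario: verbose_mode is switched on so bdev_ftl_get_properties reports band and chunk state, and jq is used to assert that no cache chunks are in use and no bands are open. A condensed sketch of that pre-check, assuming only that rpc.py is on PATH and the FTL bdev is named "ftl" as in this run (the variable names are illustrative; the rpc and jq invocations are the ones traced above):

    rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    props=$(rpc.py bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[]
                | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[]
                  | select(.state == "OPENED")] | length' <<< "$props")
    (( used == 0 && opened == 0 )) || exit 1   # the device must be idle before the test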
00:25:28.273 [2024-07-23 18:43:28.297849] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94586 ] 00:25:28.532 [2024-07-23 18:43:28.445479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.532 [2024-07-23 18:43:28.498466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.418  Copying: 578/1024 [MB] (578 MBps) Copying: 1024/1024 [MB] (average 573 MBps) 00:25:31.418 00:25:31.418 18:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:31.418 18:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:33.325 Validate MD5 checksum, iteration 2 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e357908d06b454e8392d25d79acb1556 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e357908d06b454e8392d25d79acb1556 != \e\3\5\7\9\0\8\d\0\6\b\4\5\4\e\8\3\9\2\d\2\5\d\7\9\a\c\b\1\5\5\6 ]] 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:33.325 18:43:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:33.325 [2024-07-23 18:43:33.219315] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
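This "Copying ... md5sum ... cut" sequence is the core of test_validate_checksum: each iteration reads 1024 MiB from the ftln1 namespace over NVMe/TCP into a scratch file, hashes it, and compares the result with the checksum recorded for that slice earlier in the test. A sketch of the loop with illustrative variable names (expected_md5 stands in for whatever the script stored before; tcp_dd is the common.sh wrapper whose expansion is traced above, and only the command flags are taken verbatim):

    skip=0
    for i in 0 1; do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$testfile" | cut -f1 -d' ')
      [[ $sum == "${expected_md5[$i]}" ]] || { echo "MD5 mismatch in slice $i"; exit 1; }
      skip=$((skip + 1024))
    done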
00:25:33.325 [2024-07-23 18:43:33.219519] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94637 ] 00:25:33.325 [2024-07-23 18:43:33.364923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.584 [2024-07-23 18:43:33.413147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.100  Copying: 582/1024 [MB] (582 MBps) Copying: 1024/1024 [MB] (average 564 MBps) 00:25:36.100 00:25:36.100 18:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:36.100 18:43:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=756e3cc2fdc4bb22397aed0eaacdf0b3 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 756e3cc2fdc4bb22397aed0eaacdf0b3 != \7\5\6\e\3\c\c\2\f\d\c\4\b\b\2\2\3\9\7\a\e\d\0\e\a\a\c\d\f\0\b\3 ]] 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94512 ]] 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94512 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94693 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94693 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94693 ']' 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
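Both slices match, so the script moves on to the actual scenario: tcp_target_shutdown_dirty sends SIGKILL to the running target so FTL gets no chance to shut down cleanly, and tcp_target_setup immediately relaunches spdk_tgt from the saved tgt.json and waits for its RPC socket. A paraphrase of those two helpers, using the variable names visible in the job-control message below ($spdk_tgt_bin, $spdk_tgt_cpumask, $spdk_tgt_cnfg); the real bodies live in test/ftl/common.sh:

    # dirty shutdown: SIGKILL, so the "Set FTL clean state" step never runs
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # restart from the config dumped when the target was first brought up
    "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock accepts RPCs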
00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:38.003 18:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.003 [2024-07-23 18:43:37.963897] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:25:38.003 [2024-07-23 18:43:37.965034] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94693 ] 00:25:38.262 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 94512 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:38.262 [2024-07-23 18:43:38.129632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.262 [2024-07-23 18:43:38.203355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.832 [2024-07-23 18:43:38.614393] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:38.832 [2024-07-23 18:43:38.614470] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:38.832 [2024-07-23 18:43:38.751146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.751194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:38.832 [2024-07-23 18:43:38.751208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:38.832 [2024-07-23 18:43:38.751215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.751265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.751274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:38.832 [2024-07-23 18:43:38.751281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:38.832 [2024-07-23 18:43:38.751293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.751311] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:38.832 [2024-07-23 18:43:38.751521] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:38.832 [2024-07-23 18:43:38.751544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.751552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:38.832 [2024-07-23 18:43:38.751559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.238 ms 00:25:38.832 [2024-07-23 18:43:38.751566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.751831] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:38.832 [2024-07-23 18:43:38.757840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.757874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:38.832 [2024-07-23 18:43:38.757885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.021 ms 00:25:38.832 [2024-07-23 18:43:38.757899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.759211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 
[2024-07-23 18:43:38.759237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:38.832 [2024-07-23 18:43:38.759253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:38.832 [2024-07-23 18:43:38.759267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.759502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.759520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:38.832 [2024-07-23 18:43:38.759528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:25:38.832 [2024-07-23 18:43:38.759551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.759600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.759617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:38.832 [2024-07-23 18:43:38.759632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:38.832 [2024-07-23 18:43:38.759640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.832 [2024-07-23 18:43:38.759684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.832 [2024-07-23 18:43:38.759693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:38.833 [2024-07-23 18:43:38.759700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:38.833 [2024-07-23 18:43:38.759707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.833 [2024-07-23 18:43:38.759729] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:38.833 [2024-07-23 18:43:38.760432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.833 [2024-07-23 18:43:38.760446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:38.833 [2024-07-23 18:43:38.760458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.709 ms 00:25:38.833 [2024-07-23 18:43:38.760478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.833 [2024-07-23 18:43:38.760504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.833 [2024-07-23 18:43:38.760513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:38.833 [2024-07-23 18:43:38.760520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:38.833 [2024-07-23 18:43:38.760527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.833 [2024-07-23 18:43:38.760546] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:38.833 [2024-07-23 18:43:38.760566] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:38.833 [2024-07-23 18:43:38.760612] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:38.833 [2024-07-23 18:43:38.760628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:38.833 [2024-07-23 18:43:38.760706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:38.833 [2024-07-23 18:43:38.760717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:38.833 [2024-07-23 18:43:38.760726] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:38.833 [2024-07-23 18:43:38.760735] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:38.833 [2024-07-23 18:43:38.760744] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:38.833 [2024-07-23 18:43:38.760752] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:38.833 [2024-07-23 18:43:38.760761] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:38.833 [2024-07-23 18:43:38.760768] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:38.833 [2024-07-23 18:43:38.760776] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:38.833 [2024-07-23 18:43:38.760786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.833 [2024-07-23 18:43:38.760794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:38.833 [2024-07-23 18:43:38.760802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.243 ms 00:25:38.833 [2024-07-23 18:43:38.760808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.833 [2024-07-23 18:43:38.760870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.833 [2024-07-23 18:43:38.760881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:38.833 [2024-07-23 18:43:38.760888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:38.833 [2024-07-23 18:43:38.760895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.833 [2024-07-23 18:43:38.760990] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:38.833 [2024-07-23 18:43:38.761003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:38.833 [2024-07-23 18:43:38.761011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:38.833 [2024-07-23 18:43:38.761034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:38.833 [2024-07-23 18:43:38.761047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:38.833 [2024-07-23 18:43:38.761053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:38.833 [2024-07-23 18:43:38.761058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:38.833 [2024-07-23 18:43:38.761071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:38.833 [2024-07-23 18:43:38.761076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:38.833 [2024-07-23 18:43:38.761088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:38.833 [2024-07-23 18:43:38.761093] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:38.833 [2024-07-23 18:43:38.761105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:38.833 [2024-07-23 18:43:38.761110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:38.833 [2024-07-23 18:43:38.761123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:38.833 [2024-07-23 18:43:38.761140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:38.833 [2024-07-23 18:43:38.761157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:38.833 [2024-07-23 18:43:38.761176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:38.833 [2024-07-23 18:43:38.761192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:38.833 [2024-07-23 18:43:38.761208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761223] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:38.833 [2024-07-23 18:43:38.761229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:38.833 [2024-07-23 18:43:38.761244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:38.833 [2024-07-23 18:43:38.761250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 18:43:38.761255] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:38.833 [2024-07-23 18:43:38.761262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:38.833 [2024-07-23 18:43:38.761268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:38.833 [2024-07-23 
18:43:38.761280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:38.833 [2024-07-23 18:43:38.761286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:38.833 [2024-07-23 18:43:38.761292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:38.833 [2024-07-23 18:43:38.761297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:38.833 [2024-07-23 18:43:38.761302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:38.833 [2024-07-23 18:43:38.761314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:38.833 [2024-07-23 18:43:38.761321] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:38.833 [2024-07-23 18:43:38.761330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:38.833 [2024-07-23 18:43:38.761345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:38.833 [2024-07-23 18:43:38.761364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:38.833 [2024-07-23 18:43:38.761370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:38.833 [2024-07-23 18:43:38.761378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:38.833 [2024-07-23 18:43:38.761384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:38.833 [2024-07-23 18:43:38.761408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:38.834 [2024-07-23 18:43:38.761415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:38.834 [2024-07-23 18:43:38.761424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:38.834 [2024-07-23 18:43:38.761431] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:38.834 
[2024-07-23 18:43:38.761447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.834 [2024-07-23 18:43:38.761458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:38.834 [2024-07-23 18:43:38.761464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:38.834 [2024-07-23 18:43:38.761470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:38.834 [2024-07-23 18:43:38.761477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:38.834 [2024-07-23 18:43:38.761484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.761491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:38.834 [2024-07-23 18:43:38.761498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:25:38.834 [2024-07-23 18:43:38.761505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.778206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.778244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:38.834 [2024-07-23 18:43:38.778255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.679 ms 00:25:38.834 [2024-07-23 18:43:38.778263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.778306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.778320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:38.834 [2024-07-23 18:43:38.778328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:38.834 [2024-07-23 18:43:38.778335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.794846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.794878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:38.834 [2024-07-23 18:43:38.794892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.478 ms 00:25:38.834 [2024-07-23 18:43:38.794900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.794945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.794953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:38.834 [2024-07-23 18:43:38.794960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:38.834 [2024-07-23 18:43:38.794967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.795052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.795062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:38.834 [2024-07-23 18:43:38.795070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:38.834 [2024-07-23 18:43:38.795087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:25:38.834 [2024-07-23 18:43:38.795132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.795145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:38.834 [2024-07-23 18:43:38.795153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:38.834 [2024-07-23 18:43:38.795159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.807123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.807157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:38.834 [2024-07-23 18:43:38.807167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.965 ms 00:25:38.834 [2024-07-23 18:43:38.807174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.807297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.807309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:38.834 [2024-07-23 18:43:38.807318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:38.834 [2024-07-23 18:43:38.807325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.823987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.824028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:38.834 [2024-07-23 18:43:38.824042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.677 ms 00:25:38.834 [2024-07-23 18:43:38.824058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.825175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.825212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:38.834 [2024-07-23 18:43:38.825225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:25:38.834 [2024-07-23 18:43:38.825234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.853417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.853489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:38.834 [2024-07-23 18:43:38.853505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.197 ms 00:25:38.834 [2024-07-23 18:43:38.853513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.853733] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:38.834 [2024-07-23 18:43:38.853887] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:38.834 [2024-07-23 18:43:38.854012] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:38.834 [2024-07-23 18:43:38.854134] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:38.834 [2024-07-23 18:43:38.854149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.854161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:38.834 [2024-07-23 18:43:38.854169] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.578 ms 00:25:38.834 [2024-07-23 18:43:38.854177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.854239] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:38.834 [2024-07-23 18:43:38.854265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.854283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:38.834 [2024-07-23 18:43:38.854292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:38.834 [2024-07-23 18:43:38.854299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.857448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.857483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:38.834 [2024-07-23 18:43:38.857497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.131 ms 00:25:38.834 [2024-07-23 18:43:38.857507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.858134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:38.834 [2024-07-23 18:43:38.858162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:38.834 [2024-07-23 18:43:38.858179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:38.834 [2024-07-23 18:43:38.858186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:38.834 [2024-07-23 18:43:38.858554] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:25:39.435 [2024-07-23 18:43:39.405562] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:25:39.435 [2024-07-23 18:43:39.405772] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:25:40.003 [2024-07-23 18:43:39.933801] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:25:40.003 [2024-07-23 18:43:39.933955] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:40.004 [2024-07-23 18:43:39.933975] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:40.004 [2024-07-23 18:43:39.933990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.934013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:40.004 [2024-07-23 18:43:39.934028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1077.825 ms 00:25:40.004 [2024-07-23 18:43:39.934037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.934079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.934090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:40.004 [2024-07-23 18:43:39.934100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:40.004 [2024-07-23 18:43:39.934115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 
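Because the previous target instance was killed with SIGKILL, this startup goes through the recovery path: the P2L checkpoints are preprocessed and the two NV-cache chunks that were open at the time of the kill are replayed ("Recovered chunk, offset = 262144 / 524288", "full chunks = 2, empty chunks = 2"). A quick way to confirm from a saved build log that this path actually ran, purely a log check and not part of the test itself (assumes the console output was captured to $LOG):

    grep -E 'P2L ckpt_id=[0-9]+ found seq_id=' "$LOG"
    grep -c 'Recovered chunk, offset = ' "$LOG"     # 2 open chunks replayed in this run
    grep 'FTL NV Cache: full chunks = 2, empty chunks = 2' "$LOG"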
[2024-07-23 18:43:39.942569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:40.004 [2024-07-23 18:43:39.942681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.942693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:40.004 [2024-07-23 18:43:39.942703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.564 ms 00:25:40.004 [2024-07-23 18:43:39.942711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.943350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.943363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:25:40.004 [2024-07-23 18:43:39.943372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.566 ms 00:25:40.004 [2024-07-23 18:43:39.943380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:40.004 [2024-07-23 18:43:39.945482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.055 ms 00:25:40.004 [2024-07-23 18:43:39.945491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:25:40.004 [2024-07-23 18:43:39.945559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:40.004 [2024-07-23 18:43:39.945585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:40.004 [2024-07-23 18:43:39.945715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:40.004 [2024-07-23 18:43:39.945723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:40.004 [2024-07-23 18:43:39.945765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:40.004 [2024-07-23 18:43:39.945774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945807] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:40.004 [2024-07-23 18:43:39.945832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:40.004 [2024-07-23 18:43:39.945849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:40.004 [2024-07-23 18:43:39.945857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.945921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:40.004 [2024-07-23 18:43:39.945930] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:40.004 [2024-07-23 18:43:39.945938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:40.004 [2024-07-23 18:43:39.945950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:40.004 [2024-07-23 18:43:39.947163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1197.786 ms, result 0 00:25:40.004 [2024-07-23 18:43:39.959504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.004 [2024-07-23 18:43:39.975493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:40.004 [2024-07-23 18:43:39.983623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:40.572 Validate MD5 checksum, iteration 1 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:40.572 18:43:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:40.572 [2024-07-23 18:43:40.565922] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
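The restarted target is listening again, so the same checksum validation is repeated to prove the data survived the dirty shutdown. As before, each read goes through the tcp_dd wrapper from test/ftl/common.sh, which expands to the spdk_dd invocation traced here: an NVMe/TCP initiator described by the pre-generated ini.json attaches to the target and exposes ftln1 to the copy. A sketch of what the wrapper appears to do (only the flags and paths are verbatim; the function body and the contents of ini.json are paraphrased assumptions):

    tcp_dd() {
      # ini.json was written earlier in the test and presumably attaches an
      # NVMe/TCP controller to 127.0.0.1:4420 so that ftln1 is visible to spdk_dd
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
          --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
          "$@"
    }
    # e.g. the first post-recovery slice:
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=0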
00:25:40.572 [2024-07-23 18:43:40.566141] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94728 ] 00:25:40.831 [2024-07-23 18:43:40.712153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.831 [2024-07-23 18:43:40.757887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.710  Copying: 589/1024 [MB] (589 MBps) Copying: 1024/1024 [MB] (average 574 MBps) 00:25:43.710 00:25:43.710 18:43:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:43.710 18:43:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e357908d06b454e8392d25d79acb1556 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e357908d06b454e8392d25d79acb1556 != \e\3\5\7\9\0\8\d\0\6\b\4\5\4\e\8\3\9\2\d\2\5\d\7\9\a\c\b\1\5\5\6 ]] 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:45.631 Validate MD5 checksum, iteration 2 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:45.631 18:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:45.631 [2024-07-23 18:43:45.492957] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:25:45.631 [2024-07-23 18:43:45.493137] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94779 ] 00:25:45.631 [2024-07-23 18:43:45.657328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.889 [2024-07-23 18:43:45.701211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.399  Copying: 604/1024 [MB] (604 MBps) Copying: 1024/1024 [MB] (average 585 MBps) 00:25:48.399 00:25:48.399 18:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:48.399 18:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=756e3cc2fdc4bb22397aed0eaacdf0b3 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 756e3cc2fdc4bb22397aed0eaacdf0b3 != \7\5\6\e\3\c\c\2\f\d\c\4\b\b\2\2\3\9\7\a\e\d\0\e\a\a\c\d\f\0\b\3 ]] 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94693 ]] 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94693 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94693 ']' 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94693 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94693 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94693' 00:25:50.300 killing process with pid 94693 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94693 00:25:50.300 18:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94693 
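Both checksums match after the recovery, so the test tears everything down. Unlike the earlier kill -9, the final shutdown goes through killprocess from autotest_common.sh, which sends a plain SIGTERM and waits; that is what allows the FTL deinit sequence below ("Deinit core IO channel" through "Set FTL clean state" and "Dump statistics") to run to completion. A paraphrase of the traced teardown, with the sudo special case of the real helper omitted:

    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file{,.md5}   # scratch files from the MD5 checks

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                        # already gone, nothing to do
      local name; name=$(ps --no-headers -o comm= "$pid")   # shows up as reactor_0 for spdk_tgt
      echo "killing process with pid $pid"
      kill "$pid"                                       # SIGTERM, so FTL shuts down cleanly
      wait "$pid"
    }
    killprocess "$spdk_tgt_pid"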
00:25:50.559 [2024-07-23 18:43:50.511404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:50.559 [2024-07-23 18:43:50.517058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.517139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:50.559 [2024-07-23 18:43:50.517172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:50.559 [2024-07-23 18:43:50.517193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.517230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:50.559 [2024-07-23 18:43:50.518495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.518551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:50.559 [2024-07-23 18:43:50.518592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.233 ms 00:25:50.559 [2024-07-23 18:43:50.518613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.518854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.518896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:50.559 [2024-07-23 18:43:50.518924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:25:50.559 [2024-07-23 18:43:50.518960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.520241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.520306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:50.559 [2024-07-23 18:43:50.520333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.252 ms 00:25:50.559 [2024-07-23 18:43:50.520354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.521346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.521411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:50.559 [2024-07-23 18:43:50.521436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:25:50.559 [2024-07-23 18:43:50.521462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.522993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.523054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:50.559 [2024-07-23 18:43:50.523082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.459 ms 00:25:50.559 [2024-07-23 18:43:50.523103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.524574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.524647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:50.559 [2024-07-23 18:43:50.524675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.429 ms 00:25:50.559 [2024-07-23 18:43:50.524712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.524806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 
[2024-07-23 18:43:50.524841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:50.559 [2024-07-23 18:43:50.524870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:50.559 [2024-07-23 18:43:50.524890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.559 [2024-07-23 18:43:50.526353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.559 [2024-07-23 18:43:50.526405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:50.559 [2024-07-23 18:43:50.526428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:25:50.560 [2024-07-23 18:43:50.526447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.527805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.527853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:50.560 [2024-07-23 18:43:50.527875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.318 ms 00:25:50.560 [2024-07-23 18:43:50.527894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.528985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.529033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:50.560 [2024-07-23 18:43:50.529055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.051 ms 00:25:50.560 [2024-07-23 18:43:50.529075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.530193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.530240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:50.560 [2024-07-23 18:43:50.530262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:25:50.560 [2024-07-23 18:43:50.530281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.530320] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:50.560 [2024-07-23 18:43:50.530362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:50.560 [2024-07-23 18:43:50.530394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:50.560 [2024-07-23 18:43:50.530424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:50.560 [2024-07-23 18:43:50.530453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 
18:43:50.530537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.560 [2024-07-23 18:43:50.530633] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:50.560 [2024-07-23 18:43:50.530644] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 6ed67cd7-2f04-4189-9459-a207b87110cf 00:25:50.560 [2024-07-23 18:43:50.530653] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:50.560 [2024-07-23 18:43:50.530663] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:50.560 [2024-07-23 18:43:50.530670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:50.560 [2024-07-23 18:43:50.530679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:50.560 [2024-07-23 18:43:50.530686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:50.560 [2024-07-23 18:43:50.530694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:50.560 [2024-07-23 18:43:50.530701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:50.560 [2024-07-23 18:43:50.530708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:50.560 [2024-07-23 18:43:50.530715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:50.560 [2024-07-23 18:43:50.530725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.530734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:50.560 [2024-07-23 18:43:50.530743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:25:50.560 [2024-07-23 18:43:50.530750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.533680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.533712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:50.560 [2024-07-23 18:43:50.533737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.893 ms 00:25:50.560 [2024-07-23 18:43:50.533756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.533946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.560 [2024-07-23 18:43:50.533968] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:50.560 [2024-07-23 18:43:50.533987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:25:50.560 [2024-07-23 18:43:50.534006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.544806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.544856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:50.560 [2024-07-23 18:43:50.544879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.544898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.544942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.544963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:50.560 [2024-07-23 18:43:50.544982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.545001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.545096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.545123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:50.560 [2024-07-23 18:43:50.545143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.545161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.545193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.545213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:50.560 [2024-07-23 18:43:50.545232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.545250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.569388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.569482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:50.560 [2024-07-23 18:43:50.569509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.569530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.583430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.583507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:50.560 [2024-07-23 18:43:50.583547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.583580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.583679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.583726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:50.560 [2024-07-23 18:43:50.583755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.583776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.583848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:25:50.560 [2024-07-23 18:43:50.583881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:50.560 [2024-07-23 18:43:50.583926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.583947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.584066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.584108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:50.560 [2024-07-23 18:43:50.584129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.584149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.584205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.584243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:50.560 [2024-07-23 18:43:50.584262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.584281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.584338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.584359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:50.560 [2024-07-23 18:43:50.584382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.584400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.584472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:50.560 [2024-07-23 18:43:50.584498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:50.560 [2024-07-23 18:43:50.584518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:50.560 [2024-07-23 18:43:50.584536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.560 [2024-07-23 18:43:50.584718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 67.740 ms, result 0 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:53.093 Remove shared memory files 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94512 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:53.093 ************************************ 00:25:53.093 END TEST ftl_upgrade_shutdown 00:25:53.093 ************************************ 00:25:53.093 00:25:53.093 real 1m11.762s 00:25:53.093 user 1m32.140s 00:25:53.093 sys 0m22.611s 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:53.093 18:43:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:53.093 18:43:52 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:25:53.093 18:43:52 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:53.093 18:43:52 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:25:53.093 18:43:52 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:53.093 18:43:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:53.093 ************************************ 00:25:53.093 START TEST ftl_restore_fast 00:25:53.093 ************************************ 00:25:53.093 18:43:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:53.093 * Looking for test storage... 00:25:53.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.093 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:53.093 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hdMuXZIiDT 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:53.094 18:43:53 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94927 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94927 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 94927 ']' 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:53.094 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:53.353 [2024-07-23 18:43:53.196548] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:25:53.353 [2024-07-23 18:43:53.196800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94927 ] 00:25:53.353 [2024-07-23 18:43:53.345003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.610 [2024-07-23 18:43:53.416592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.176 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:54.176 18:43:53 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:25:54.176 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:54.433 { 00:25:54.433 "name": "nvme0n1", 00:25:54.433 "aliases": [ 00:25:54.433 "6c23680a-fae7-427d-b652-fe311c2d75ea" 00:25:54.433 ], 00:25:54.433 "product_name": "NVMe disk", 00:25:54.433 "block_size": 4096, 00:25:54.433 "num_blocks": 1310720, 00:25:54.433 "uuid": "6c23680a-fae7-427d-b652-fe311c2d75ea", 00:25:54.433 "assigned_rate_limits": { 00:25:54.433 "rw_ios_per_sec": 0, 00:25:54.433 "rw_mbytes_per_sec": 0, 00:25:54.433 "r_mbytes_per_sec": 0, 00:25:54.433 "w_mbytes_per_sec": 0 00:25:54.433 }, 00:25:54.433 "claimed": true, 00:25:54.433 "claim_type": "read_many_write_one", 00:25:54.433 "zoned": false, 00:25:54.433 "supported_io_types": { 00:25:54.433 "read": true, 00:25:54.433 "write": true, 00:25:54.433 "unmap": true, 00:25:54.433 "write_zeroes": true, 00:25:54.433 "flush": true, 00:25:54.433 "reset": true, 00:25:54.433 "compare": true, 00:25:54.433 "compare_and_write": false, 00:25:54.433 "abort": true, 00:25:54.433 "nvme_admin": true, 00:25:54.433 "nvme_io": true 00:25:54.433 }, 00:25:54.433 "driver_specific": { 00:25:54.433 "nvme": [ 00:25:54.433 { 00:25:54.433 "pci_address": "0000:00:11.0", 00:25:54.433 "trid": { 00:25:54.433 "trtype": "PCIe", 00:25:54.433 "traddr": "0000:00:11.0" 00:25:54.433 }, 00:25:54.433 "ctrlr_data": { 00:25:54.433 "cntlid": 0, 00:25:54.433 "vendor_id": "0x1b36", 00:25:54.433 "model_number": "QEMU NVMe Ctrl", 00:25:54.433 "serial_number": "12341", 00:25:54.433 "firmware_revision": "8.0.0", 00:25:54.433 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:54.433 "oacs": { 00:25:54.433 "security": 0, 00:25:54.433 "format": 1, 00:25:54.433 "firmware": 0, 00:25:54.433 "ns_manage": 1 00:25:54.433 }, 00:25:54.433 "multi_ctrlr": false, 00:25:54.433 "ana_reporting": false 00:25:54.433 }, 00:25:54.433 "vs": { 00:25:54.433 "nvme_version": "1.4" 00:25:54.433 }, 00:25:54.433 "ns_data": { 00:25:54.433 "id": 1, 00:25:54.433 "can_share": false 00:25:54.433 } 00:25:54.433 } 00:25:54.433 ], 00:25:54.433 "mp_policy": "active_passive" 00:25:54.433 } 00:25:54.433 } 00:25:54.433 ]' 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:54.433 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=12262268-3ef1-4d4b-abd9-570247e98b44 00:25:54.691 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:25:54.691 18:43:54 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12262268-3ef1-4d4b-abd9-570247e98b44 00:25:54.949 18:43:54 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:55.206 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=2285af2d-f535-4d00-8e74-466dec12a3fb 00:25:55.206 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2285af2d-f535-4d00-8e74-466dec12a3fb 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:55.464 { 00:25:55.464 "name": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:55.464 "aliases": [ 00:25:55.464 "lvs/nvme0n1p0" 00:25:55.464 ], 00:25:55.464 "product_name": "Logical Volume", 00:25:55.464 "block_size": 4096, 00:25:55.464 "num_blocks": 26476544, 00:25:55.464 "uuid": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:55.464 "assigned_rate_limits": { 00:25:55.464 "rw_ios_per_sec": 0, 00:25:55.464 "rw_mbytes_per_sec": 0, 00:25:55.464 "r_mbytes_per_sec": 0, 00:25:55.464 "w_mbytes_per_sec": 0 00:25:55.464 }, 00:25:55.464 "claimed": false, 00:25:55.464 "zoned": false, 00:25:55.464 "supported_io_types": { 00:25:55.464 "read": true, 00:25:55.464 "write": true, 00:25:55.464 "unmap": true, 00:25:55.464 "write_zeroes": true, 00:25:55.464 "flush": false, 00:25:55.464 "reset": true, 00:25:55.464 "compare": false, 00:25:55.464 "compare_and_write": false, 00:25:55.464 "abort": false, 00:25:55.464 "nvme_admin": false, 00:25:55.464 "nvme_io": false 00:25:55.464 }, 00:25:55.464 "driver_specific": { 00:25:55.464 "lvol": { 00:25:55.464 "lvol_store_uuid": "2285af2d-f535-4d00-8e74-466dec12a3fb", 00:25:55.464 "base_bdev": "nvme0n1", 00:25:55.464 "thin_provision": true, 00:25:55.464 "num_allocated_clusters": 0, 00:25:55.464 "snapshot": false, 00:25:55.464 "clone": false, 00:25:55.464 "esnap_clone": false 00:25:55.464 } 00:25:55.464 } 00:25:55.464 } 00:25:55.464 ]' 00:25:55.464 18:43:55 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:55.464 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:25:55.722 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:55.980 18:43:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:55.980 { 00:25:55.980 "name": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:55.980 "aliases": [ 00:25:55.980 "lvs/nvme0n1p0" 00:25:55.980 ], 00:25:55.980 "product_name": "Logical Volume", 00:25:55.980 "block_size": 4096, 00:25:55.980 "num_blocks": 26476544, 00:25:55.980 "uuid": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:55.980 "assigned_rate_limits": { 00:25:55.980 "rw_ios_per_sec": 0, 00:25:55.980 "rw_mbytes_per_sec": 0, 00:25:55.980 "r_mbytes_per_sec": 0, 00:25:55.980 "w_mbytes_per_sec": 0 00:25:55.980 }, 00:25:55.980 "claimed": false, 00:25:55.980 "zoned": false, 00:25:55.980 "supported_io_types": { 00:25:55.980 "read": true, 00:25:55.980 "write": true, 00:25:55.980 "unmap": true, 00:25:55.980 "write_zeroes": true, 00:25:55.980 "flush": false, 00:25:55.980 "reset": true, 00:25:55.980 "compare": false, 00:25:55.980 "compare_and_write": false, 00:25:55.980 "abort": false, 00:25:55.980 "nvme_admin": false, 00:25:55.980 "nvme_io": false 00:25:55.980 }, 00:25:55.980 "driver_specific": { 00:25:55.980 "lvol": { 00:25:55.980 "lvol_store_uuid": "2285af2d-f535-4d00-8e74-466dec12a3fb", 00:25:55.980 "base_bdev": "nvme0n1", 00:25:55.980 "thin_provision": true, 00:25:55.980 "num_allocated_clusters": 0, 00:25:55.980 "snapshot": false, 00:25:55.980 "clone": false, 00:25:55.980 "esnap_clone": false 00:25:55.980 } 00:25:55.980 } 00:25:55.980 } 00:25:55.980 ]' 00:25:55.980 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:56.237 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:56.237 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
00:25:56.237 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:56.238 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:56.495 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3ebb2e36-7a2a-4721-8248-fa49d1288b8c 00:25:56.495 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:56.495 { 00:25:56.495 "name": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:56.495 "aliases": [ 00:25:56.495 "lvs/nvme0n1p0" 00:25:56.495 ], 00:25:56.495 "product_name": "Logical Volume", 00:25:56.495 "block_size": 4096, 00:25:56.495 "num_blocks": 26476544, 00:25:56.495 "uuid": "3ebb2e36-7a2a-4721-8248-fa49d1288b8c", 00:25:56.495 "assigned_rate_limits": { 00:25:56.495 "rw_ios_per_sec": 0, 00:25:56.495 "rw_mbytes_per_sec": 0, 00:25:56.495 "r_mbytes_per_sec": 0, 00:25:56.495 "w_mbytes_per_sec": 0 00:25:56.495 }, 00:25:56.495 "claimed": false, 00:25:56.495 "zoned": false, 00:25:56.495 "supported_io_types": { 00:25:56.495 "read": true, 00:25:56.495 "write": true, 00:25:56.495 "unmap": true, 00:25:56.495 "write_zeroes": true, 00:25:56.495 "flush": false, 00:25:56.495 "reset": true, 00:25:56.495 "compare": false, 00:25:56.495 "compare_and_write": false, 00:25:56.495 "abort": false, 00:25:56.495 "nvme_admin": false, 00:25:56.495 "nvme_io": false 00:25:56.495 }, 00:25:56.495 "driver_specific": { 00:25:56.495 "lvol": { 00:25:56.495 "lvol_store_uuid": "2285af2d-f535-4d00-8e74-466dec12a3fb", 00:25:56.495 "base_bdev": "nvme0n1", 00:25:56.495 "thin_provision": true, 00:25:56.495 "num_allocated_clusters": 0, 00:25:56.495 "snapshot": false, 00:25:56.495 "clone": false, 00:25:56.495 "esnap_clone": false 00:25:56.495 } 00:25:56.495 } 00:25:56.495 } 00:25:56.495 ]' 00:25:56.495 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:56.495 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:56.495 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d 3ebb2e36-7a2a-4721-8248-fa49d1288b8c --l2p_dram_limit 10' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:25:56.755 18:43:56 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3ebb2e36-7a2a-4721-8248-fa49d1288b8c --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:25:56.755 [2024-07-23 18:43:56.733334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.733478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.755 [2024-07-23 18:43:56.733514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.755 [2024-07-23 18:43:56.733534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.733622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.733647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.755 [2024-07-23 18:43:56.733669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:56.755 [2024-07-23 18:43:56.733690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.733726] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.755 [2024-07-23 18:43:56.734074] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.755 [2024-07-23 18:43:56.734141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.734174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.755 [2024-07-23 18:43:56.734198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:25:56.755 [2024-07-23 18:43:56.734244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.734332] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:25:56.755 [2024-07-23 18:43:56.736760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.736824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:56.755 [2024-07-23 18:43:56.736855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:56.755 [2024-07-23 18:43:56.736882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.751231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.751313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.755 [2024-07-23 18:43:56.751342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.317 ms 00:25:56.755 [2024-07-23 18:43:56.751376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 
[2024-07-23 18:43:56.751477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.751509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.755 [2024-07-23 18:43:56.751630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:56.755 [2024-07-23 18:43:56.751670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.751765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.751806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.755 [2024-07-23 18:43:56.751833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:56.755 [2024-07-23 18:43:56.751873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.751932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.755 [2024-07-23 18:43:56.754707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.754758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.755 [2024-07-23 18:43:56.754791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:25:56.755 [2024-07-23 18:43:56.754816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.754867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.754912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.755 [2024-07-23 18:43:56.754942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:56.755 [2024-07-23 18:43:56.754969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.755005] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:56.755 [2024-07-23 18:43:56.755184] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.755 [2024-07-23 18:43:56.755229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.755 [2024-07-23 18:43:56.755268] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:56.755 [2024-07-23 18:43:56.755329] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.755 [2024-07-23 18:43:56.755367] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.755 [2024-07-23 18:43:56.755419] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:56.755 [2024-07-23 18:43:56.755462] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.755 [2024-07-23 18:43:56.755498] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.755 [2024-07-23 18:43:56.755524] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.755 [2024-07-23 18:43:56.755551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.755594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.755 [2024-07-23 
18:43:56.755627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:56.755 [2024-07-23 18:43:56.755657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.755758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.755 [2024-07-23 18:43:56.755787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.755 [2024-07-23 18:43:56.755823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:56.755 [2024-07-23 18:43:56.755849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.755 [2024-07-23 18:43:56.755972] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.755 [2024-07-23 18:43:56.756006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.755 [2024-07-23 18:43:56.756030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.755 [2024-07-23 18:43:56.756096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.755 [2024-07-23 18:43:56.756172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.755 [2024-07-23 18:43:56.756217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.755 [2024-07-23 18:43:56.756237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:56.755 [2024-07-23 18:43:56.756284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.755 [2024-07-23 18:43:56.756303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.755 [2024-07-23 18:43:56.756340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:56.755 [2024-07-23 18:43:56.756368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.755 [2024-07-23 18:43:56.756408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.755 [2024-07-23 18:43:56.756492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.755 [2024-07-23 18:43:56.756581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:25:56.755 [2024-07-23 18:43:56.756667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.755 [2024-07-23 18:43:56.756734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.755 [2024-07-23 18:43:56.756792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.755 [2024-07-23 18:43:56.756814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:56.755 [2024-07-23 18:43:56.756838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.755 [2024-07-23 18:43:56.756861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.756 [2024-07-23 18:43:56.756881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:56.756 [2024-07-23 18:43:56.756913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.756 [2024-07-23 18:43:56.756932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.756 [2024-07-23 18:43:56.756960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:56.756 [2024-07-23 18:43:56.756983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.756 [2024-07-23 18:43:56.757004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.756 [2024-07-23 18:43:56.757029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:56.756 [2024-07-23 18:43:56.757054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.756 [2024-07-23 18:43:56.757073] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.756 [2024-07-23 18:43:56.757100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.756 [2024-07-23 18:43:56.757120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.756 [2024-07-23 18:43:56.757175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.756 [2024-07-23 18:43:56.757205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.756 [2024-07-23 18:43:56.757234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.756 [2024-07-23 18:43:56.757259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.756 [2024-07-23 18:43:56.757287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.756 [2024-07-23 18:43:56.757311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.756 [2024-07-23 18:43:56.757341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.756 [2024-07-23 18:43:56.757373] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.756 [2024-07-23 18:43:56.757418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:56.756 
[2024-07-23 18:43:56.757519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:56.756 [2024-07-23 18:43:56.757560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:56.756 [2024-07-23 18:43:56.757611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:56.756 [2024-07-23 18:43:56.757647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:56.756 [2024-07-23 18:43:56.757687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:56.756 [2024-07-23 18:43:56.757725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:56.756 [2024-07-23 18:43:56.757783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:56.756 [2024-07-23 18:43:56.757819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:56.756 [2024-07-23 18:43:56.757869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:56.756 [2024-07-23 18:43:56.757914] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.756 [2024-07-23 18:43:56.757925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.756 [2024-07-23 18:43:56.757950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.756 [2024-07-23 18:43:56.757957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.756 [2024-07-23 18:43:56.757967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.756 [2024-07-23 18:43:56.757975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.756 [2024-07-23 18:43:56.757986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.756 [2024-07-23 
18:43:56.757994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:25:56.756 [2024-07-23 18:43:56.758015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.756 [2024-07-23 18:43:56.758063] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:56.756 [2024-07-23 18:43:56.758083] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:00.951 [2024-07-23 18:44:00.356060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.951 [2024-07-23 18:44:00.356238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:00.951 [2024-07-23 18:44:00.356286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3604.930 ms 00:26:00.951 [2024-07-23 18:44:00.356316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.951 [2024-07-23 18:44:00.376567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.951 [2024-07-23 18:44:00.376718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.951 [2024-07-23 18:44:00.376756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.134 ms 00:26:00.951 [2024-07-23 18:44:00.376780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.951 [2024-07-23 18:44:00.376905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.951 [2024-07-23 18:44:00.376945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:00.951 [2024-07-23 18:44:00.377016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:00.951 [2024-07-23 18:44:00.377041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.951 [2024-07-23 18:44:00.393687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.951 [2024-07-23 18:44:00.393802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.951 [2024-07-23 18:44:00.393837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.575 ms 00:26:00.951 [2024-07-23 18:44:00.393860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.951 [2024-07-23 18:44:00.393919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.951 [2024-07-23 18:44:00.393955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.951 [2024-07-23 18:44:00.394001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:00.951 [2024-07-23 18:44:00.394086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.951 [2024-07-23 18:44:00.394975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.395026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.952 [2024-07-23 18:44:00.395055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:26:00.952 [2024-07-23 18:44:00.395079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.395204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.395248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.952 [2024-07-23 18:44:00.395274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.089 ms 00:26:00.952 [2024-07-23 18:44:00.395298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.407275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.407356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.952 [2024-07-23 18:44:00.407397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.961 ms 00:26:00.952 [2024-07-23 18:44:00.407420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.416144] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:00.952 [2024-07-23 18:44:00.421339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.421396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:00.952 [2024-07-23 18:44:00.421428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:26:00.952 [2024-07-23 18:44:00.421448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.519799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.519959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:00.952 [2024-07-23 18:44:00.519999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.487 ms 00:26:00.952 [2024-07-23 18:44:00.520024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.520246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.520285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:00.952 [2024-07-23 18:44:00.520316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:26:00.952 [2024-07-23 18:44:00.520350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.524048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.524118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:00.952 [2024-07-23 18:44:00.524154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:26:00.952 [2024-07-23 18:44:00.524178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.527076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.527137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:00.952 [2024-07-23 18:44:00.527181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:26:00.952 [2024-07-23 18:44:00.527207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.527511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.527591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:00.952 [2024-07-23 18:44:00.527627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:26:00.952 [2024-07-23 18:44:00.527647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.571796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 
[2024-07-23 18:44:00.571914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:00.952 [2024-07-23 18:44:00.571953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.162 ms 00:26:00.952 [2024-07-23 18:44:00.571979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.577296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.577371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:00.952 [2024-07-23 18:44:00.577402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.270 ms 00:26:00.952 [2024-07-23 18:44:00.577422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.580650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.580712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:00.952 [2024-07-23 18:44:00.580750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:26:00.952 [2024-07-23 18:44:00.580771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.584348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.584412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:00.952 [2024-07-23 18:44:00.584443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:26:00.952 [2024-07-23 18:44:00.584463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.584529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.584556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:00.952 [2024-07-23 18:44:00.584594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:00.952 [2024-07-23 18:44:00.584615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.584707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.584741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:00.952 [2024-07-23 18:44:00.584773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:00.952 [2024-07-23 18:44:00.584791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.586244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3859.855 ms, result 0 00:26:00.952 { 00:26:00.952 "name": "ftl0", 00:26:00.952 "uuid": "13fadea6-e45b-447a-8eb6-e8ad4e453dcb" 00:26:00.952 } 00:26:00.952 18:44:00 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:00.952 18:44:00 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:00.952 18:44:00 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:00.952 18:44:00 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:00.952 [2024-07-23 18:44:00.961026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.961110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:26:00.952 [2024-07-23 18:44:00.961142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:00.952 [2024-07-23 18:44:00.961169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.961206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:00.952 [2024-07-23 18:44:00.962528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.962583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:00.952 [2024-07-23 18:44:00.962603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:26:00.952 [2024-07-23 18:44:00.962611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.962825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.962836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:00.952 [2024-07-23 18:44:00.962847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:26:00.952 [2024-07-23 18:44:00.962854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.965349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.965370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:00.952 [2024-07-23 18:44:00.965380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:26:00.952 [2024-07-23 18:44:00.965389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.970392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.970419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:00.952 [2024-07-23 18:44:00.970431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:26:00.952 [2024-07-23 18:44:00.970438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.971893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.971925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:00.952 [2024-07-23 18:44:00.971940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:26:00.952 [2024-07-23 18:44:00.971947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.977275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.977309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:00.952 [2024-07-23 18:44:00.977322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:26:00.952 [2024-07-23 18:44:00.977329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.977443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.977453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:00.952 [2024-07-23 18:44:00.977464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:00.952 [2024-07-23 18:44:00.977475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 
18:44:00.979420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.979450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:00.952 [2024-07-23 18:44:00.979462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:26:00.952 [2024-07-23 18:44:00.979469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.952 [2024-07-23 18:44:00.980885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.952 [2024-07-23 18:44:00.980914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:00.952 [2024-07-23 18:44:00.980928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:26:00.953 [2024-07-23 18:44:00.980935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.953 [2024-07-23 18:44:00.982160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.953 [2024-07-23 18:44:00.982188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:00.953 [2024-07-23 18:44:00.982199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:26:00.953 [2024-07-23 18:44:00.982206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.953 [2024-07-23 18:44:00.983384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.953 [2024-07-23 18:44:00.983413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:00.953 [2024-07-23 18:44:00.983424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:26:00.953 [2024-07-23 18:44:00.983431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.953 [2024-07-23 18:44:00.983457] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:00.953 [2024-07-23 18:44:00.983472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983850] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.983999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 
18:44:00.984087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:00.953 [2024-07-23 18:44:00.984247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:26:00.954 [2024-07-23 18:44:00.984312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:00.954 [2024-07-23 18:44:00.984441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:00.954 [2024-07-23 18:44:00.984454] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:26:00.954 [2024-07-23 18:44:00.984462] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:00.954 [2024-07-23 18:44:00.984471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:00.954 [2024-07-23 18:44:00.984488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:00.954 [2024-07-23 18:44:00.984508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:00.954 [2024-07-23 18:44:00.984515] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:00.954 [2024-07-23 18:44:00.984532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:00.954 [2024-07-23 18:44:00.984542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:00.954 [2024-07-23 18:44:00.984551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:00.954 [2024-07-23 18:44:00.984557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:00.954 [2024-07-23 18:44:00.984664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.954 [2024-07-23 18:44:00.984691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:00.954 [2024-07-23 18:44:00.984725] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:26:00.954 [2024-07-23 18:44:00.984744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:00.987968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.954 [2024-07-23 18:44:00.988028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:00.954 [2024-07-23 18:44:00.988063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:26:00.954 [2024-07-23 18:44:00.988083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:00.988276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.954 [2024-07-23 18:44:00.988307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:00.954 [2024-07-23 18:44:00.988335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:26:00.954 [2024-07-23 18:44:00.988354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:00.999452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.954 [2024-07-23 18:44:00.999516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.954 [2024-07-23 18:44:00.999555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.954 [2024-07-23 18:44:00.999590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:00.999660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.954 [2024-07-23 18:44:00.999691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.954 [2024-07-23 18:44:00.999714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.954 [2024-07-23 18:44:00.999744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:00.999850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.954 [2024-07-23 18:44:00.999907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.954 [2024-07-23 18:44:00.999954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.954 [2024-07-23 18:44:00.999984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.954 [2024-07-23 18:44:01.000066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:00.954 [2024-07-23 18:44:01.000110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.954 [2024-07-23 18:44:01.000155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:00.954 [2024-07-23 18:44:01.000187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.025715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.025812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:01.214 [2024-07-23 18:44:01.025847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.025867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.039245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.039330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:26:01.214 [2024-07-23 18:44:01.039365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.039385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.039496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.039531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:01.214 [2024-07-23 18:44:01.039666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.039694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.039799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.039834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:01.214 [2024-07-23 18:44:01.039867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.039887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.040009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.040047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:01.214 [2024-07-23 18:44:01.040077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.040125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.040191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.040225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:01.214 [2024-07-23 18:44:01.040260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.040279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.040363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.040395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:01.214 [2024-07-23 18:44:01.040425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.040435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.040493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.214 [2024-07-23 18:44:01.040507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:01.214 [2024-07-23 18:44:01.040517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.214 [2024-07-23 18:44:01.040524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.214 [2024-07-23 18:44:01.040688] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.761 ms, result 0 00:26:01.214 true 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94927 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94927 ']' 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94927 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:26:01.214 18:44:01 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94927 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94927' 00:26:01.214 killing process with pid 94927 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 94927 00:26:01.214 18:44:01 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 94927 00:26:06.489 18:44:05 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:09.777 262144+0 records in 00:26:09.777 262144+0 records out 00:26:09.777 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.32719 s, 323 MB/s 00:26:09.777 18:44:09 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:11.154 18:44:11 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:11.154 [2024-07-23 18:44:11.069005] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:26:11.154 [2024-07-23 18:44:11.069134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95174 ] 00:26:11.414 [2024-07-23 18:44:11.217214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.414 [2024-07-23 18:44:11.288493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.414 [2024-07-23 18:44:11.440668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.414 [2024-07-23 18:44:11.440750] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.674 [2024-07-23 18:44:11.591213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.591267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:11.674 [2024-07-23 18:44:11.591283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:11.674 [2024-07-23 18:44:11.591290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.591337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.591346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.674 [2024-07-23 18:44:11.591354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:11.674 [2024-07-23 18:44:11.591365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.591382] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:11.674 [2024-07-23 18:44:11.591610] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:11.674 [2024-07-23 18:44:11.591630] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.591649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.674 [2024-07-23 18:44:11.591657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:26:11.674 [2024-07-23 18:44:11.591664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.594084] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:11.674 [2024-07-23 18:44:11.597536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.597582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:11.674 [2024-07-23 18:44:11.597598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:26:11.674 [2024-07-23 18:44:11.597607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.597662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.597671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:11.674 [2024-07-23 18:44:11.597680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:11.674 [2024-07-23 18:44:11.597697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.610082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.610110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.674 [2024-07-23 18:44:11.610129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.370 ms 00:26:11.674 [2024-07-23 18:44:11.610136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.610229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.610240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.674 [2024-07-23 18:44:11.610249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:11.674 [2024-07-23 18:44:11.610256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.674 [2024-07-23 18:44:11.610313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.674 [2024-07-23 18:44:11.610323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:11.674 [2024-07-23 18:44:11.610338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:11.675 [2024-07-23 18:44:11.610345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.675 [2024-07-23 18:44:11.610371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:11.675 [2024-07-23 18:44:11.613090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.675 [2024-07-23 18:44:11.613113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.675 [2024-07-23 18:44:11.613121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.734 ms 00:26:11.675 [2024-07-23 18:44:11.613129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.675 [2024-07-23 18:44:11.613159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.675 [2024-07-23 18:44:11.613167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:26:11.675 [2024-07-23 18:44:11.613178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:11.675 [2024-07-23 18:44:11.613196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.675 [2024-07-23 18:44:11.613220] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:11.675 [2024-07-23 18:44:11.613244] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:11.675 [2024-07-23 18:44:11.613286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:11.675 [2024-07-23 18:44:11.613305] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:11.675 [2024-07-23 18:44:11.613387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:11.675 [2024-07-23 18:44:11.613404] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:11.675 [2024-07-23 18:44:11.613421] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:11.675 [2024-07-23 18:44:11.613431] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:11.675 [2024-07-23 18:44:11.613440] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:11.675 [2024-07-23 18:44:11.613448] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:11.675 [2024-07-23 18:44:11.613456] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:11.675 [2024-07-23 18:44:11.613463] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:11.675 [2024-07-23 18:44:11.613471] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:11.675 [2024-07-23 18:44:11.613480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.675 [2024-07-23 18:44:11.613488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:11.675 [2024-07-23 18:44:11.613495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:26:11.675 [2024-07-23 18:44:11.613506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.675 [2024-07-23 18:44:11.613670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.675 [2024-07-23 18:44:11.613708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:11.675 [2024-07-23 18:44:11.613729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:26:11.675 [2024-07-23 18:44:11.613748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.675 [2024-07-23 18:44:11.613848] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:11.675 [2024-07-23 18:44:11.613878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:11.675 [2024-07-23 18:44:11.613908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:11.675 [2024-07-23 
18:44:11.614061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:11.675 [2024-07-23 18:44:11.614138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.675 [2024-07-23 18:44:11.614189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:11.675 [2024-07-23 18:44:11.614230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:11.675 [2024-07-23 18:44:11.614258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.675 [2024-07-23 18:44:11.614277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:11.675 [2024-07-23 18:44:11.614306] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:11.675 [2024-07-23 18:44:11.614335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:11.675 [2024-07-23 18:44:11.614400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:11.675 [2024-07-23 18:44:11.614452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:11.675 [2024-07-23 18:44:11.614472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:11.675 [2024-07-23 18:44:11.614492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:11.675 [2024-07-23 18:44:11.614511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:11.675 [2024-07-23 18:44:11.614542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:11.675 [2024-07-23 18:44:11.614557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:11.675 [2024-07-23 18:44:11.614565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:11.675 [2024-07-23 18:44:11.614585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:26:11.675 [2024-07-23 18:44:11.614591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:11.675 [2024-07-23 18:44:11.614598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:11.675 [2024-07-23 18:44:11.614604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:11.675 [2024-07-23 18:44:11.614624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:11.675 [2024-07-23 18:44:11.614633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614639] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:11.675 [2024-07-23 18:44:11.614648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:11.675 [2024-07-23 18:44:11.614655] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.675 [2024-07-23 18:44:11.614674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:11.675 [2024-07-23 18:44:11.614684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:11.675 [2024-07-23 18:44:11.614692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:11.675 [2024-07-23 18:44:11.614701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:11.675 [2024-07-23 18:44:11.614709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:11.675 [2024-07-23 18:44:11.614716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:11.675 [2024-07-23 18:44:11.614727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:11.675 [2024-07-23 18:44:11.614736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.675 [2024-07-23 18:44:11.614747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:11.675 [2024-07-23 18:44:11.614755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:11.675 [2024-07-23 18:44:11.614762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:11.675 [2024-07-23 18:44:11.614769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:11.675 [2024-07-23 18:44:11.614777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:11.675 [2024-07-23 18:44:11.614784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:11.675 [2024-07-23 18:44:11.614790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:11.675 [2024-07-23 18:44:11.614797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:11.675 [2024-07-23 
18:44:11.614803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:11.675 [2024-07-23 18:44:11.614812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:11.675 [2024-07-23 18:44:11.614819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:11.675 [2024-07-23 18:44:11.614826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:11.676 [2024-07-23 18:44:11.614832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:11.676 [2024-07-23 18:44:11.614839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:11.676 [2024-07-23 18:44:11.614846] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:11.676 [2024-07-23 18:44:11.614854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.676 [2024-07-23 18:44:11.614861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:11.676 [2024-07-23 18:44:11.614868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:11.676 [2024-07-23 18:44:11.614890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:11.676 [2024-07-23 18:44:11.614897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:11.676 [2024-07-23 18:44:11.614906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.614914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:11.676 [2024-07-23 18:44:11.614921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:26:11.676 [2024-07-23 18:44:11.614938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.647118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.647207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.676 [2024-07-23 18:44:11.647244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.173 ms 00:26:11.676 [2024-07-23 18:44:11.647271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.647397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.647423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:11.676 [2024-07-23 18:44:11.647448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:11.676 [2024-07-23 18:44:11.647477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.663894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 
18:44:11.663979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.676 [2024-07-23 18:44:11.664009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.275 ms 00:26:11.676 [2024-07-23 18:44:11.664030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.664089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.664112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.676 [2024-07-23 18:44:11.664131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:11.676 [2024-07-23 18:44:11.664166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.665006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.665060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.676 [2024-07-23 18:44:11.665092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:26:11.676 [2024-07-23 18:44:11.665113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.665298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.665335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.676 [2024-07-23 18:44:11.665362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:11.676 [2024-07-23 18:44:11.665387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.675318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.675381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.676 [2024-07-23 18:44:11.675419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.891 ms 00:26:11.676 [2024-07-23 18:44:11.675439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.679106] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:11.676 [2024-07-23 18:44:11.679183] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:11.676 [2024-07-23 18:44:11.679232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.679252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:11.676 [2024-07-23 18:44:11.679271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:26:11.676 [2024-07-23 18:44:11.679290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.692195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.692278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:11.676 [2024-07-23 18:44:11.692306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.880 ms 00:26:11.676 [2024-07-23 18:44:11.692326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.694177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.694238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:26:11.676 [2024-07-23 18:44:11.694269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:26:11.676 [2024-07-23 18:44:11.694288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.695751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.695810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:11.676 [2024-07-23 18:44:11.695839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:26:11.676 [2024-07-23 18:44:11.695858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.676 [2024-07-23 18:44:11.696186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.676 [2024-07-23 18:44:11.696231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:11.676 [2024-07-23 18:44:11.696279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:26:11.676 [2024-07-23 18:44:11.696302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.726790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.726952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:11.936 [2024-07-23 18:44:11.726986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.464 ms 00:26:11.936 [2024-07-23 18:44:11.727008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.734335] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:11.936 [2024-07-23 18:44:11.738830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.738902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:11.936 [2024-07-23 18:44:11.738932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.778 ms 00:26:11.936 [2024-07-23 18:44:11.738951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.739053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.739089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:11.936 [2024-07-23 18:44:11.739109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:11.936 [2024-07-23 18:44:11.739173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.739267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.739293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:11.936 [2024-07-23 18:44:11.739337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:11.936 [2024-07-23 18:44:11.739370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.739408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.739431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:11.936 [2024-07-23 18:44:11.739461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:11.936 [2024-07-23 18:44:11.739481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:11.936 [2024-07-23 18:44:11.739539] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:11.936 [2024-07-23 18:44:11.739599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.739621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:11.936 [2024-07-23 18:44:11.739661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:11.936 [2024-07-23 18:44:11.739716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.744579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.744649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.936 [2024-07-23 18:44:11.744682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:26:11.936 [2024-07-23 18:44:11.744703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.744806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.936 [2024-07-23 18:44:11.744849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.936 [2024-07-23 18:44:11.744877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:11.936 [2024-07-23 18:44:11.744898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.936 [2024-07-23 18:44:11.746321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.936 ms, result 0 00:26:45.777  Copying: 29/1024 [MB] (29 MBps) Copying: 59/1024 [MB] (29 MBps) Copying: 89/1024 [MB] (29 MBps) Copying: 119/1024 [MB] (30 MBps) Copying: 149/1024 [MB] (30 MBps) Copying: 178/1024 [MB] (28 MBps) Copying: 208/1024 [MB] (30 MBps) Copying: 239/1024 [MB] (30 MBps) Copying: 269/1024 [MB] (30 MBps) Copying: 299/1024 [MB] (30 MBps) Copying: 329/1024 [MB] (29 MBps) Copying: 359/1024 [MB] (29 MBps) Copying: 389/1024 [MB] (30 MBps) Copying: 420/1024 [MB] (30 MBps) Copying: 450/1024 [MB] (29 MBps) Copying: 479/1024 [MB] (29 MBps) Copying: 509/1024 [MB] (30 MBps) Copying: 540/1024 [MB] (30 MBps) Copying: 570/1024 [MB] (30 MBps) Copying: 600/1024 [MB] (30 MBps) Copying: 630/1024 [MB] (30 MBps) Copying: 659/1024 [MB] (29 MBps) Copying: 690/1024 [MB] (30 MBps) Copying: 720/1024 [MB] (30 MBps) Copying: 750/1024 [MB] (29 MBps) Copying: 780/1024 [MB] (29 MBps) Copying: 810/1024 [MB] (30 MBps) Copying: 841/1024 [MB] (30 MBps) Copying: 872/1024 [MB] (31 MBps) Copying: 903/1024 [MB] (30 MBps) Copying: 933/1024 [MB] (30 MBps) Copying: 964/1024 [MB] (30 MBps) Copying: 994/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-23 18:44:45.628483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.777 [2024-07-23 18:44:45.628555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:45.777 [2024-07-23 18:44:45.628585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.777 [2024-07-23 18:44:45.628600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.777 [2024-07-23 18:44:45.628624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:45.777 [2024-07-23 18:44:45.629891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.777 [2024-07-23 18:44:45.629909] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:45.777 [2024-07-23 18:44:45.629917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:26:45.777 [2024-07-23 18:44:45.629924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.777 [2024-07-23 18:44:45.631700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.778 [2024-07-23 18:44:45.631739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:45.778 [2024-07-23 18:44:45.631767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:26:45.778 [2024-07-23 18:44:45.631776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.778 [2024-07-23 18:44:45.631814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.778 [2024-07-23 18:44:45.631824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:45.778 [2024-07-23 18:44:45.631833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.778 [2024-07-23 18:44:45.631841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.778 [2024-07-23 18:44:45.631897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.778 [2024-07-23 18:44:45.631916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:45.778 [2024-07-23 18:44:45.631925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:45.778 [2024-07-23 18:44:45.631933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.778 [2024-07-23 18:44:45.631948] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:45.778 [2024-07-23 18:44:45.631966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.631976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.631985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.631994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632071] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 
18:44:45.632276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:26:45.778 [2024-07-23 18:44:45.632459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:45.778 [2024-07-23 18:44:45.632628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:45.779 [2024-07-23 18:44:45.632868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:45.779 [2024-07-23 18:44:45.632875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:26:45.779 [2024-07-23 18:44:45.632882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:45.779 [2024-07-23 18:44:45.632889] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:45.779 [2024-07-23 18:44:45.632898] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:45.779 [2024-07-23 18:44:45.632913] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:45.779 [2024-07-23 18:44:45.632920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:45.779 [2024-07-23 18:44:45.632927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:45.779 [2024-07-23 18:44:45.632934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:45.779 [2024-07-23 18:44:45.632940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:45.779 [2024-07-23 18:44:45.632946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:45.779 [2024-07-23 18:44:45.632953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.779 [2024-07-23 18:44:45.632960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:45.779 [2024-07-23 18:44:45.632971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:26:45.779 [2024-07-23 18:44:45.632979] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.636022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.779 [2024-07-23 18:44:45.636043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:45.779 [2024-07-23 18:44:45.636052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.035 ms 00:26:45.779 [2024-07-23 18:44:45.636060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.636229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.779 [2024-07-23 18:44:45.636243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:45.779 [2024-07-23 18:44:45.636251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:26:45.779 [2024-07-23 18:44:45.636258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.645453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.645519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.779 [2024-07-23 18:44:45.645561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.645597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.645676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.645716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.779 [2024-07-23 18:44:45.645747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.645768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.645858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.645900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.779 [2024-07-23 18:44:45.645932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.645973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.646011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.646042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.779 [2024-07-23 18:44:45.646074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.646103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.669240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.669334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.779 [2024-07-23 18:44:45.669364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.669386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.683303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.683390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.779 [2024-07-23 18:44:45.683420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.683453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.683530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.683565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.779 [2024-07-23 18:44:45.683660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.683680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.683779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.683800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.779 [2024-07-23 18:44:45.683831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.683874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.683954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.683999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.779 [2024-07-23 18:44:45.684026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.684046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.684100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.684133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:45.779 [2024-07-23 18:44:45.684162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.684190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.684266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.684297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.779 [2024-07-23 18:44:45.684326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.684353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.684422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:45.779 [2024-07-23 18:44:45.684453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.779 [2024-07-23 18:44:45.684481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:45.779 [2024-07-23 18:44:45.684525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.779 [2024-07-23 18:44:45.684752] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.298 ms, result 0 00:26:47.159 00:26:47.159 00:26:47.159 18:44:46 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:47.159 [2024-07-23 18:44:46.950922] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:26:47.159 [2024-07-23 18:44:46.951124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95539 ] 00:26:47.159 [2024-07-23 18:44:47.097823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.159 [2024-07-23 18:44:47.168729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.418 [2024-07-23 18:44:47.320082] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:47.418 [2024-07-23 18:44:47.320243] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:47.418 [2024-07-23 18:44:47.470477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.418 [2024-07-23 18:44:47.470640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.418 [2024-07-23 18:44:47.470676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.418 [2024-07-23 18:44:47.470697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.418 [2024-07-23 18:44:47.470768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.418 [2024-07-23 18:44:47.470799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.418 [2024-07-23 18:44:47.470836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:47.418 [2024-07-23 18:44:47.470860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.418 [2024-07-23 18:44:47.470929] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:47.418 [2024-07-23 18:44:47.471209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.418 [2024-07-23 18:44:47.471272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.418 [2024-07-23 18:44:47.471306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.678 [2024-07-23 18:44:47.471326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:26:47.678 [2024-07-23 18:44:47.471373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.471717] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:47.678 [2024-07-23 18:44:47.471773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.471811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:47.678 [2024-07-23 18:44:47.471854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:47.678 [2024-07-23 18:44:47.471878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.471960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.471993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:47.678 [2024-07-23 18:44:47.472022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:47.678 [2024-07-23 18:44:47.472042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.472307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 
18:44:47.472347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.678 [2024-07-23 18:44:47.472376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:26:47.678 [2024-07-23 18:44:47.472400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.472527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.472565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.678 [2024-07-23 18:44:47.472606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:47.678 [2024-07-23 18:44:47.472635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.472676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.472707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.678 [2024-07-23 18:44:47.472735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:47.678 [2024-07-23 18:44:47.472768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.472833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.678 [2024-07-23 18:44:47.475624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.475682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.678 [2024-07-23 18:44:47.475728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:26:47.678 [2024-07-23 18:44:47.475748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.475793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.475806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.678 [2024-07-23 18:44:47.475815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:47.678 [2024-07-23 18:44:47.475830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.475871] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:47.678 [2024-07-23 18:44:47.475896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:47.678 [2024-07-23 18:44:47.475943] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:47.678 [2024-07-23 18:44:47.475961] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:47.678 [2024-07-23 18:44:47.476054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.678 [2024-07-23 18:44:47.476066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:47.678 [2024-07-23 18:44:47.476084] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:47.678 [2024-07-23 18:44:47.476099] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476109] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476118] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.678 [2024-07-23 18:44:47.476126] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.678 [2024-07-23 18:44:47.476134] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.678 [2024-07-23 18:44:47.476142] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.678 [2024-07-23 18:44:47.476155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.476163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.678 [2024-07-23 18:44:47.476171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:26:47.678 [2024-07-23 18:44:47.476178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.476244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.678 [2024-07-23 18:44:47.476256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.678 [2024-07-23 18:44:47.476264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:47.678 [2024-07-23 18:44:47.476271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.678 [2024-07-23 18:44:47.476348] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.678 [2024-07-23 18:44:47.476358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.678 [2024-07-23 18:44:47.476375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.678 [2024-07-23 18:44:47.476397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.678 [2024-07-23 18:44:47.476423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476430] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.678 [2024-07-23 18:44:47.476438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:47.678 [2024-07-23 18:44:47.476446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.678 [2024-07-23 18:44:47.476455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.678 [2024-07-23 18:44:47.476462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.678 [2024-07-23 18:44:47.476469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.678 [2024-07-23 18:44:47.476476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.678 [2024-07-23 18:44:47.476490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.678 [2024-07-23 18:44:47.476510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.678 [2024-07-23 18:44:47.476533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.678 [2024-07-23 18:44:47.476554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476560] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:47.678 [2024-07-23 18:44:47.476589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.678 [2024-07-23 18:44:47.476603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.678 [2024-07-23 18:44:47.476610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.678 [2024-07-23 18:44:47.476618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.678 [2024-07-23 18:44:47.476624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.678 [2024-07-23 18:44:47.476631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.678 [2024-07-23 18:44:47.476637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.679 [2024-07-23 18:44:47.476645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.679 [2024-07-23 18:44:47.476652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.679 [2024-07-23 18:44:47.476664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.679 [2024-07-23 18:44:47.476671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.679 [2024-07-23 18:44:47.476678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.679 [2024-07-23 18:44:47.476685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.679 [2024-07-23 18:44:47.476692] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.679 [2024-07-23 18:44:47.476701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.679 [2024-07-23 18:44:47.476712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.679 [2024-07-23 18:44:47.476727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.679 [2024-07-23 18:44:47.476735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.679 [2024-07-23 18:44:47.476743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.679 [2024-07-23 18:44:47.476762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.679 [2024-07-23 18:44:47.476769] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.679 [2024-07-23 18:44:47.476775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.679 [2024-07-23 18:44:47.476781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.679 [2024-07-23 18:44:47.476790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.679 [2024-07-23 18:44:47.476798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.679 [2024-07-23 18:44:47.476818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.679 [2024-07-23 18:44:47.476825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.679 [2024-07-23 18:44:47.476832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.679 [2024-07-23 18:44:47.476839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.679 [2024-07-23 18:44:47.476846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.679 [2024-07-23 18:44:47.476853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.679 [2024-07-23 18:44:47.476860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.679 [2024-07-23 18:44:47.476867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.679 [2024-07-23 18:44:47.476875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.679 [2024-07-23 18:44:47.476910] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.679 [2024-07-23 18:44:47.476917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.679 [2024-07-23 18:44:47.476926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:26:47.679 [2024-07-23 18:44:47.476934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.679 [2024-07-23 18:44:47.476941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.679 [2024-07-23 18:44:47.476948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.679 [2024-07-23 18:44:47.476965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.476973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.679 [2024-07-23 18:44:47.476980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:26:47.679 [2024-07-23 18:44:47.476987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.500249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.500332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.679 [2024-07-23 18:44:47.500369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.261 ms 00:26:47.679 [2024-07-23 18:44:47.500395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.500513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.500597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:47.679 [2024-07-23 18:44:47.500637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:47.679 [2024-07-23 18:44:47.500677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.516765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.516841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.679 [2024-07-23 18:44:47.516869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.027 ms 00:26:47.679 [2024-07-23 18:44:47.516889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.516938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.516977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.679 [2024-07-23 18:44:47.516999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:47.679 [2024-07-23 18:44:47.517019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.517128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.517170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.679 [2024-07-23 18:44:47.517198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:47.679 [2024-07-23 18:44:47.517225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.517354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.517391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.679 [2024-07-23 18:44:47.517419] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:47.679 [2024-07-23 18:44:47.517448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.527272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.527335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.679 [2024-07-23 18:44:47.527361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.796 ms 00:26:47.679 [2024-07-23 18:44:47.527381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.527538] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:47.679 [2024-07-23 18:44:47.527786] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:47.679 [2024-07-23 18:44:47.527825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.527877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:47.679 [2024-07-23 18:44:47.527913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:26:47.679 [2024-07-23 18:44:47.527932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.538147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.538215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:47.679 [2024-07-23 18:44:47.538256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.200 ms 00:26:47.679 [2024-07-23 18:44:47.538276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.538411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.538441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:47.679 [2024-07-23 18:44:47.538467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:47.679 [2024-07-23 18:44:47.538492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.538583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.538623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:47.679 [2024-07-23 18:44:47.538662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:47.679 [2024-07-23 18:44:47.538693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.538961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.539004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:47.679 [2024-07-23 18:44:47.539033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:26:47.679 [2024-07-23 18:44:47.539052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.539100] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:47.679 [2024-07-23 18:44:47.539142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.539176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:47.679 [2024-07-23 18:44:47.539202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:47.679 [2024-07-23 18:44:47.539230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.679 [2024-07-23 18:44:47.547594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:47.679 [2024-07-23 18:44:47.547753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.679 [2024-07-23 18:44:47.547780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:47.680 [2024-07-23 18:44:47.547830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.500 ms 00:26:47.680 [2024-07-23 18:44:47.547862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.549997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.550048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:47.680 [2024-07-23 18:44:47.550072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:26:47.680 [2024-07-23 18:44:47.550091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.550177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.550201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:47.680 [2024-07-23 18:44:47.550221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:47.680 [2024-07-23 18:44:47.550242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.550298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.550328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:47.680 [2024-07-23 18:44:47.550373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.680 [2024-07-23 18:44:47.550401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.550460] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:47.680 [2024-07-23 18:44:47.550496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.550515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:47.680 [2024-07-23 18:44:47.550607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:47.680 [2024-07-23 18:44:47.550634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.555877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.555946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:47.680 [2024-07-23 18:44:47.555977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:26:47.680 [2024-07-23 18:44:47.556016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.556097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.680 [2024-07-23 18:44:47.556131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:47.680 [2024-07-23 18:44:47.556158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 
00:26:47.680 [2024-07-23 18:44:47.556202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.680 [2024-07-23 18:44:47.557705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.886 ms, result 0 00:27:18.567  Copying: 33/1024 [MB] (33 MBps) Copying: 67/1024 [MB] (33 MBps) Copying: 99/1024 [MB] (32 MBps) Copying: 130/1024 [MB] (30 MBps) Copying: 161/1024 [MB] (31 MBps) Copying: 193/1024 [MB] (31 MBps) Copying: 224/1024 [MB] (31 MBps) Copying: 258/1024 [MB] (33 MBps) Copying: 290/1024 [MB] (32 MBps) Copying: 323/1024 [MB] (32 MBps) Copying: 356/1024 [MB] (32 MBps) Copying: 390/1024 [MB] (34 MBps) Copying: 424/1024 [MB] (34 MBps) Copying: 458/1024 [MB] (33 MBps) Copying: 491/1024 [MB] (33 MBps) Copying: 525/1024 [MB] (33 MBps) Copying: 559/1024 [MB] (33 MBps) Copying: 592/1024 [MB] (32 MBps) Copying: 625/1024 [MB] (33 MBps) Copying: 662/1024 [MB] (36 MBps) Copying: 700/1024 [MB] (37 MBps) Copying: 738/1024 [MB] (38 MBps) Copying: 774/1024 [MB] (36 MBps) Copying: 808/1024 [MB] (33 MBps) Copying: 844/1024 [MB] (36 MBps) Copying: 879/1024 [MB] (34 MBps) Copying: 912/1024 [MB] (33 MBps) Copying: 948/1024 [MB] (36 MBps) Copying: 984/1024 [MB] (36 MBps) Copying: 1019/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-07-23 18:45:18.327252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.567 [2024-07-23 18:45:18.327644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:18.567 [2024-07-23 18:45:18.327847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:18.567 [2024-07-23 18:45:18.327956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.567 [2024-07-23 18:45:18.328133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:18.567 [2024-07-23 18:45:18.329886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.567 [2024-07-23 18:45:18.329993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:18.567 [2024-07-23 18:45:18.330059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:27:18.567 [2024-07-23 18:45:18.330129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.567 [2024-07-23 18:45:18.330658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.567 [2024-07-23 18:45:18.330748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:18.567 [2024-07-23 18:45:18.330821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:27:18.567 [2024-07-23 18:45:18.330890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.567 [2024-07-23 18:45:18.331012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.567 [2024-07-23 18:45:18.331080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:18.567 [2024-07-23 18:45:18.331148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:18.567 [2024-07-23 18:45:18.331211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.567 [2024-07-23 18:45:18.331358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.568 [2024-07-23 18:45:18.331427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:18.568 [2024-07-23 18:45:18.331495] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:18.568 [2024-07-23 18:45:18.331587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.568 [2024-07-23 18:45:18.331670] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:18.568 [2024-07-23 18:45:18.331777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.331884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.331972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.332995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.333994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334068] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:18.568 [2024-07-23 18:45:18.334278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 
18:45:18.334538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:18.569 [2024-07-23 18:45:18.334628] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:18.569 [2024-07-23 18:45:18.334686] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:27:18.569 [2024-07-23 18:45:18.334705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:18.569 [2024-07-23 18:45:18.334724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:18.569 [2024-07-23 18:45:18.334759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:18.569 [2024-07-23 18:45:18.334794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:18.569 [2024-07-23 18:45:18.334813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:18.569 [2024-07-23 18:45:18.334831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:18.569 [2024-07-23 18:45:18.334849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:18.569 [2024-07-23 18:45:18.334866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:18.569 [2024-07-23 18:45:18.334882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:18.569 [2024-07-23 18:45:18.335235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.569 [2024-07-23 18:45:18.335254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:18.569 [2024-07-23 18:45:18.335273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.574 ms 00:27:18.569 [2024-07-23 18:45:18.335304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.339309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.569 [2024-07-23 18:45:18.339363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:18.569 [2024-07-23 18:45:18.339408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms 00:27:18.569 [2024-07-23 18:45:18.339427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.339682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.569 [2024-07-23 18:45:18.339725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:18.569 [2024-07-23 18:45:18.339754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:27:18.569 [2024-07-23 18:45:18.339779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.350285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.350354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.569 [2024-07-23 18:45:18.350391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.350414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.350496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.350530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.569 [2024-07-23 18:45:18.350585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.350620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.350702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.350744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.569 [2024-07-23 18:45:18.350775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.350806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.350847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.350883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.569 [2024-07-23 18:45:18.350912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.350948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.374639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.374729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.569 [2024-07-23 18:45:18.374761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.374780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.389750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.389837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.569 [2024-07-23 18:45:18.389869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.389887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.389957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.389977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.569 [2024-07-23 18:45:18.390014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.390098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.390127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.569 [2024-07-23 18:45:18.390151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.390280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.390313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.569 [2024-07-23 18:45:18.390338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:18.569 [2024-07-23 18:45:18.390407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.390437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:18.569 [2024-07-23 18:45:18.390462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.390545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.390619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.569 [2024-07-23 18:45:18.390644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.390739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.569 [2024-07-23 18:45:18.390773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.569 [2024-07-23 18:45:18.390799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.569 [2024-07-23 18:45:18.390825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.569 [2024-07-23 18:45:18.390995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 63.877 ms, result 0 00:27:18.828 00:27:18.828 00:27:18.828 18:45:18 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:20.736 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:20.736 18:45:20 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:20.736 [2024-07-23 18:45:20.478248] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:20.736 [2024-07-23 18:45:20.478366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95889 ] 00:27:20.736 [2024-07-23 18:45:20.624073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.736 [2024-07-23 18:45:20.698058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.997 [2024-07-23 18:45:20.847963] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.997 [2024-07-23 18:45:20.848033] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.997 [2024-07-23 18:45:20.998276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.998324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:20.997 [2024-07-23 18:45:20.998338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:20.997 [2024-07-23 18:45:20.998345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.998387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.998396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.997 [2024-07-23 18:45:20.998403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:20.997 [2024-07-23 18:45:20.998414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.998428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:20.997 [2024-07-23 18:45:20.998663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:20.997 [2024-07-23 18:45:20.998683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.998694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.997 [2024-07-23 18:45:20.998703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:27:20.997 [2024-07-23 18:45:20.998710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999004] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:20.997 [2024-07-23 18:45:20.999030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.999047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:20.997 [2024-07-23 18:45:20.999055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:20.997 [2024-07-23 18:45:20.999073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.999144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:20.997 [2024-07-23 18:45:20.999152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:20.997 [2024-07-23 18:45:20.999159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 
18:45:20.999382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.997 [2024-07-23 18:45:20.999390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:27:20.997 [2024-07-23 18:45:20.999401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.999491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.997 [2024-07-23 18:45:20.999498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:20.997 [2024-07-23 18:45:20.999505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:20.999536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:20.997 [2024-07-23 18:45:20.999543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:20.997 [2024-07-23 18:45:20.999564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:20.999605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:20.997 [2024-07-23 18:45:21.002387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:21.002438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.997 [2024-07-23 18:45:21.002466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:27:20.997 [2024-07-23 18:45:21.002495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:21.002560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:21.002623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:20.997 [2024-07-23 18:45:21.002653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:20.997 [2024-07-23 18:45:21.002672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:21.002733] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:20.997 [2024-07-23 18:45:21.002776] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:20.997 [2024-07-23 18:45:21.002841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:20.997 [2024-07-23 18:45:21.002896] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:20.997 [2024-07-23 18:45:21.003005] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:20.997 [2024-07-23 18:45:21.003042] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:20.997 [2024-07-23 18:45:21.003099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:20.997 [2024-07-23 18:45:21.003143] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:20.997 [2024-07-23 18:45:21.003181] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:20.997 [2024-07-23 18:45:21.003223] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:20.997 [2024-07-23 18:45:21.003252] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:20.997 [2024-07-23 18:45:21.003279] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:20.997 [2024-07-23 18:45:21.003315] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:20.997 [2024-07-23 18:45:21.003353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:21.003378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:20.997 [2024-07-23 18:45:21.003404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:27:20.997 [2024-07-23 18:45:21.003433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:21.003520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.997 [2024-07-23 18:45:21.003563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:20.997 [2024-07-23 18:45:21.003619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:20.997 [2024-07-23 18:45:21.003651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.997 [2024-07-23 18:45:21.003751] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:20.997 [2024-07-23 18:45:21.003782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:20.997 [2024-07-23 18:45:21.003808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.997 [2024-07-23 18:45:21.003837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.997 [2024-07-23 18:45:21.003873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:20.997 [2024-07-23 18:45:21.003900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:20.997 [2024-07-23 18:45:21.003925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:20.998 [2024-07-23 18:45:21.003957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:20.998 [2024-07-23 18:45:21.003982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.998 [2024-07-23 18:45:21.004032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:20.998 [2024-07-23 18:45:21.004061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:20.998 [2024-07-23 18:45:21.004086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.998 [2024-07-23 18:45:21.004114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:20.998 [2024-07-23 18:45:21.004141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:20.998 [2024-07-23 18:45:21.004169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:20.998 [2024-07-23 18:45:21.004222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:20.998 [2024-07-23 18:45:21.004301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:20.998 [2024-07-23 18:45:21.004385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:20.998 [2024-07-23 18:45:21.004457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:20.998 [2024-07-23 18:45:21.004537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:20.998 [2024-07-23 18:45:21.004619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.998 [2024-07-23 18:45:21.004673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:20.998 [2024-07-23 18:45:21.004698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:20.998 [2024-07-23 18:45:21.004726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.998 [2024-07-23 18:45:21.004752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:20.998 [2024-07-23 18:45:21.004779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:20.998 [2024-07-23 18:45:21.004809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:20.998 [2024-07-23 18:45:21.004861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:20.998 [2024-07-23 18:45:21.004885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.998 [2024-07-23 18:45:21.004914] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:20.998 [2024-07-23 18:45:21.004933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:20.998 [2024-07-23 18:45:21.004965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.998 [2024-07-23 18:45:21.004996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.998 [2024-07-23 18:45:21.005022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:20.998 [2024-07-23 18:45:21.005048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:20.998 [2024-07-23 18:45:21.005075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:20.998 [2024-07-23 18:45:21.005100] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:20.998 [2024-07-23 18:45:21.005125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:20.998 [2024-07-23 18:45:21.005153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:20.998 [2024-07-23 18:45:21.005183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:20.998 [2024-07-23 18:45:21.005219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:20.998 [2024-07-23 18:45:21.005308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:20.998 [2024-07-23 18:45:21.005347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:20.998 [2024-07-23 18:45:21.005386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:20.998 [2024-07-23 18:45:21.005423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:20.998 [2024-07-23 18:45:21.005460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:20.998 [2024-07-23 18:45:21.005502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:20.998 [2024-07-23 18:45:21.005538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:20.998 [2024-07-23 18:45:21.005590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:20.998 [2024-07-23 18:45:21.005629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:20.998 [2024-07-23 18:45:21.005832] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:20.998 [2024-07-23 18:45:21.005883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.998 [2024-07-23 18:45:21.005924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:20.998 [2024-07-23 18:45:21.005962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:20.998 [2024-07-23 18:45:21.006015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:20.998 [2024-07-23 18:45:21.006062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:20.998 [2024-07-23 18:45:21.006110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.998 [2024-07-23 18:45:21.006144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:20.998 [2024-07-23 18:45:21.006170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:27:20.998 [2024-07-23 18:45:21.006196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.998 [2024-07-23 18:45:21.029357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.998 [2024-07-23 18:45:21.029636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.998 [2024-07-23 18:45:21.029771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:27:20.998 [2024-07-23 18:45:21.029929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.998 [2024-07-23 18:45:21.030303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.998 [2024-07-23 18:45:21.030469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:20.998 [2024-07-23 18:45:21.030600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:27:20.998 [2024-07-23 18:45:21.030704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.052086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.052216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.257 [2024-07-23 18:45:21.052281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.180 ms 00:27:21.257 [2024-07-23 18:45:21.052338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.052435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.052527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.257 [2024-07-23 18:45:21.052621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:21.257 [2024-07-23 18:45:21.052678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.052893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.052967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.257 [2024-07-23 18:45:21.053027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:27:21.257 [2024-07-23 18:45:21.053046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.053266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.053288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.257 [2024-07-23 18:45:21.053305] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:27:21.257 [2024-07-23 18:45:21.053319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.064308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.064350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.257 [2024-07-23 18:45:21.064364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.971 ms 00:27:21.257 [2024-07-23 18:45:21.064375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.064554] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:21.257 [2024-07-23 18:45:21.064608] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:21.257 [2024-07-23 18:45:21.064622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.064632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:21.257 [2024-07-23 18:45:21.064648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:27:21.257 [2024-07-23 18:45:21.064658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.075175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.075203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:21.257 [2024-07-23 18:45:21.075226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:27:21.257 [2024-07-23 18:45:21.075234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.075350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.075359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:21.257 [2024-07-23 18:45:21.075375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:21.257 [2024-07-23 18:45:21.075385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.075426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.075438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:21.257 [2024-07-23 18:45:21.075446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:21.257 [2024-07-23 18:45:21.075453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.075758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.075775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.257 [2024-07-23 18:45:21.075784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:27:21.257 [2024-07-23 18:45:21.075790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.075810] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:21.257 [2024-07-23 18:45:21.075821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.075833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:27:21.257 [2024-07-23 18:45:21.075840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:21.257 [2024-07-23 18:45:21.075851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.083908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.257 [2024-07-23 18:45:21.084036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.084047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.257 [2024-07-23 18:45:21.084067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.182 ms 00:27:21.257 [2024-07-23 18:45:21.084079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.086270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.086297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:21.257 [2024-07-23 18:45:21.086306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:27:21.257 [2024-07-23 18:45:21.086313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.086389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.086399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.257 [2024-07-23 18:45:21.086407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:21.257 [2024-07-23 18:45:21.086416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.086451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.086459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.257 [2024-07-23 18:45:21.086466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:21.257 [2024-07-23 18:45:21.086474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.086506] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:21.257 [2024-07-23 18:45:21.086515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.086531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:21.257 [2024-07-23 18:45:21.086546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:21.257 [2024-07-23 18:45:21.086552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.091856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.091889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.257 [2024-07-23 18:45:21.091912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.275 ms 00:27:21.257 [2024-07-23 18:45:21.091919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.091982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.257 [2024-07-23 18:45:21.091991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.257 [2024-07-23 18:45:21.092000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 
00:27:21.257 [2024-07-23 18:45:21.092006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.257 [2024-07-23 18:45:21.093436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.872 ms, result 0 00:27:56.572  Copying: 27/1024 [MB] (27 MBps) Copying: 56/1024 [MB] (29 MBps) Copying: 85/1024 [MB] (28 MBps) Copying: 114/1024 [MB] (28 MBps) Copying: 143/1024 [MB] (28 MBps) Copying: 172/1024 [MB] (29 MBps) Copying: 201/1024 [MB] (28 MBps) Copying: 231/1024 [MB] (30 MBps) Copying: 261/1024 [MB] (30 MBps) Copying: 291/1024 [MB] (30 MBps) Copying: 321/1024 [MB] (30 MBps) Copying: 351/1024 [MB] (29 MBps) Copying: 381/1024 [MB] (29 MBps) Copying: 411/1024 [MB] (30 MBps) Copying: 441/1024 [MB] (29 MBps) Copying: 470/1024 [MB] (29 MBps) Copying: 499/1024 [MB] (29 MBps) Copying: 529/1024 [MB] (29 MBps) Copying: 559/1024 [MB] (30 MBps) Copying: 589/1024 [MB] (29 MBps) Copying: 619/1024 [MB] (29 MBps) Copying: 648/1024 [MB] (29 MBps) Copying: 678/1024 [MB] (30 MBps) Copying: 708/1024 [MB] (29 MBps) Copying: 738/1024 [MB] (29 MBps) Copying: 768/1024 [MB] (30 MBps) Copying: 797/1024 [MB] (29 MBps) Copying: 826/1024 [MB] (28 MBps) Copying: 856/1024 [MB] (30 MBps) Copying: 887/1024 [MB] (30 MBps) Copying: 917/1024 [MB] (29 MBps) Copying: 947/1024 [MB] (30 MBps) Copying: 977/1024 [MB] (30 MBps) Copying: 1007/1024 [MB] (30 MBps) Copying: 1023/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-23 18:45:56.476215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.572 [2024-07-23 18:45:56.476312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:56.572 [2024-07-23 18:45:56.476329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:56.572 [2024-07-23 18:45:56.476338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.572 [2024-07-23 18:45:56.477397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:56.572 [2024-07-23 18:45:56.480854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.572 [2024-07-23 18:45:56.480888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:56.572 [2024-07-23 18:45:56.480900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:27:56.572 [2024-07-23 18:45:56.480909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.572 [2024-07-23 18:45:56.489665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.572 [2024-07-23 18:45:56.489697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:56.572 [2024-07-23 18:45:56.489708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:27:56.572 [2024-07-23 18:45:56.489730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.572 [2024-07-23 18:45:56.489763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.572 [2024-07-23 18:45:56.489772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:56.572 [2024-07-23 18:45:56.489781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:56.572 [2024-07-23 18:45:56.489795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.572 [2024-07-23 18:45:56.489853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:56.572 [2024-07-23 18:45:56.489864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:56.572 [2024-07-23 18:45:56.489871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:56.572 [2024-07-23 18:45:56.489885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.572 [2024-07-23 18:45:56.489899] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:56.572 [2024-07-23 18:45:56.489910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:27:56.572 [2024-07-23 18:45:56.489919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.489996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:56.572 [2024-07-23 18:45:56.490165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490224] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 
18:45:56.490392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:27:56.573 [2024-07-23 18:45:56.490585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:56.573 [2024-07-23 18:45:56.490619] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:56.573 [2024-07-23 18:45:56.490639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:27:56.573 [2024-07-23 18:45:56.490647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:27:56.573 [2024-07-23 18:45:56.490660] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:27:56.573 [2024-07-23 18:45:56.490673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:27:56.573 [2024-07-23 18:45:56.490680] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:27:56.573 [2024-07-23 18:45:56.490690] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:56.573 [2024-07-23 18:45:56.490708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:56.573 [2024-07-23 18:45:56.490716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:56.573 [2024-07-23 18:45:56.490722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:56.573 [2024-07-23 18:45:56.490728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:56.573 [2024-07-23 18:45:56.490735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.573 [2024-07-23 18:45:56.490748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:56.573 [2024-07-23 18:45:56.490762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:27:56.573 [2024-07-23 18:45:56.490768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.573 [2024-07-23 18:45:56.493865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.573 [2024-07-23 18:45:56.493884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:56.573 [2024-07-23 18:45:56.493901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:27:56.573 [2024-07-23 18:45:56.493908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.573 [2024-07-23 18:45:56.494083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.573 [2024-07-23 18:45:56.494096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:56.573 [2024-07-23 18:45:56.494103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:27:56.573 [2024-07-23 18:45:56.494111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.573 [2024-07-23 18:45:56.503371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.573 [2024-07-23 18:45:56.503397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.573 [2024-07-23 18:45:56.503406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
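The shutdown statistics above report WAF alongside the counters it is derived from, so the value can be checked directly against the log. A minimal sketch of that check, assuming WAF here is simply total writes divided by user writes (the variable names are illustrative, not part of the test scripts):

# Recompute WAF from the ftl_dev_dump_stats counters logged above.
# Assumes WAF = total writes / user writes; values copied from the log.
total_writes=128800
user_writes=128768
awk -v t="$total_writes" -v u="$user_writes" \
    'BEGIN { printf "WAF = %.4f\n", t / u }'    # prints: WAF = 1.0002

The 32-block gap between total and user writes is what keeps WAF marginally above 1.0; presumably FTL-internal metadata writes in this run, though the log does not state the cause explicitly.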
00:27:56.573 [2024-07-23 18:45:56.503414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.573 [2024-07-23 18:45:56.503467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.573 [2024-07-23 18:45:56.503477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.573 [2024-07-23 18:45:56.503484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.573 [2024-07-23 18:45:56.503491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.573 [2024-07-23 18:45:56.503541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.503559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.574 [2024-07-23 18:45:56.503613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.503621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.503639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.503648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.574 [2024-07-23 18:45:56.503655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.503663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.525941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.525976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.574 [2024-07-23 18:45:56.525986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.525993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.539934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.539961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.574 [2024-07-23 18:45:56.539980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.539988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.574 [2024-07-23 18:45:56.540064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.574 [2024-07-23 18:45:56.540131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.574 [2024-07-23 
18:45:56.540232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:56.574 [2024-07-23 18:45:56.540306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.574 [2024-07-23 18:45:56.540369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.574 [2024-07-23 18:45:56.540434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.574 [2024-07-23 18:45:56.540441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.574 [2024-07-23 18:45:56.540449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.574 [2024-07-23 18:45:56.540586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 66.813 ms, result 0 00:27:57.510 00:27:57.510 00:27:57.510 18:45:57 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:57.511 [2024-07-23 18:45:57.458435] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
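For reference, the --skip and --count values passed to spdk_dd above line up with the 1024 MB copy reported later in this run. A minimal sketch of that arithmetic, assuming the counts are expressed in the FTL bdev's 4 KiB blocks (an inference from the copy totals, not something the command output states):

# Relate spdk_dd --skip/--count to byte offsets, assuming 4096-byte blocks
# on ftl0 (inferred: 262144 blocks * 4 KiB matches the 1024 MB copied).
block_size=4096
skip_blocks=131072
count_blocks=262144
echo "skip offset: $(( skip_blocks  * block_size / 1024 / 1024 )) MiB"   # 512 MiB
echo "copy length: $(( count_blocks * block_size / 1024 / 1024 )) MiB"   # 1024 MiB

At the ~32 MBps average reported by the copy progress, a 1024 MB transfer takes roughly 32 seconds, consistent with the 18:45:58 to 18:46:29 span of the timestamps below.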
00:27:57.511 [2024-07-23 18:45:57.458675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96264 ] 00:27:57.769 [2024-07-23 18:45:57.602520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.769 [2024-07-23 18:45:57.670544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.769 [2024-07-23 18:45:57.819512] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.769 [2024-07-23 18:45:57.819620] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.029 [2024-07-23 18:45:57.969964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.970014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:58.029 [2024-07-23 18:45:57.970029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.029 [2024-07-23 18:45:57.970043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.970090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.970102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.029 [2024-07-23 18:45:57.970111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:58.029 [2024-07-23 18:45:57.970123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.970141] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:58.029 [2024-07-23 18:45:57.970337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:58.029 [2024-07-23 18:45:57.970358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.970369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.029 [2024-07-23 18:45:57.970377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:27:58.029 [2024-07-23 18:45:57.970384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.970646] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:58.029 [2024-07-23 18:45:57.970672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.970695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:58.029 [2024-07-23 18:45:57.970705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:58.029 [2024-07-23 18:45:57.970715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.970793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.970815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:58.029 [2024-07-23 18:45:57.970831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:58.029 [2024-07-23 18:45:57.970838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.971065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 
18:45:57.971084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.029 [2024-07-23 18:45:57.971099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:58.029 [2024-07-23 18:45:57.971110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.971189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.971202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.029 [2024-07-23 18:45:57.971210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:58.029 [2024-07-23 18:45:57.971217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.971245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.971253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:58.029 [2024-07-23 18:45:57.971260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:58.029 [2024-07-23 18:45:57.971273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.971299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:58.029 [2024-07-23 18:45:57.974016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.974036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.029 [2024-07-23 18:45:57.974045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.732 ms 00:27:58.029 [2024-07-23 18:45:57.974053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.974084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.974093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:58.029 [2024-07-23 18:45:57.974103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:58.029 [2024-07-23 18:45:57.974111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.974155] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:58.029 [2024-07-23 18:45:57.974176] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:58.029 [2024-07-23 18:45:57.974224] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:58.029 [2024-07-23 18:45:57.974241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:58.029 [2024-07-23 18:45:57.974320] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:58.029 [2024-07-23 18:45:57.974337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:58.029 [2024-07-23 18:45:57.974346] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:58.029 [2024-07-23 18:45:57.974360] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:58.029 [2024-07-23 18:45:57.974369] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:58.029 [2024-07-23 18:45:57.974376] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:58.029 [2024-07-23 18:45:57.974384] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:58.029 [2024-07-23 18:45:57.974398] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:58.029 [2024-07-23 18:45:57.974405] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:58.029 [2024-07-23 18:45:57.974413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.974420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:58.029 [2024-07-23 18:45:57.974435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:27:58.029 [2024-07-23 18:45:57.974449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.974514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.029 [2024-07-23 18:45:57.974526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:58.029 [2024-07-23 18:45:57.974541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:58.029 [2024-07-23 18:45:57.974547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.029 [2024-07-23 18:45:57.974643] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:58.029 [2024-07-23 18:45:57.974656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:58.029 [2024-07-23 18:45:57.974672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.029 [2024-07-23 18:45:57.974679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.029 [2024-07-23 18:45:57.974693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:58.029 [2024-07-23 18:45:57.974700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:58.029 [2024-07-23 18:45:57.974706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:58.029 [2024-07-23 18:45:57.974714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:58.029 [2024-07-23 18:45:57.974721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:58.029 [2024-07-23 18:45:57.974729] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.029 [2024-07-23 18:45:57.974736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:58.029 [2024-07-23 18:45:57.974743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:58.029 [2024-07-23 18:45:57.974752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.029 [2024-07-23 18:45:57.974759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:58.029 [2024-07-23 18:45:57.974765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:58.029 [2024-07-23 18:45:57.974771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.029 [2024-07-23 18:45:57.974777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:58.029 [2024-07-23 18:45:57.974783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:58.030 [2024-07-23 18:45:57.974801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:58.030 [2024-07-23 18:45:57.974818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:58.030 [2024-07-23 18:45:57.974837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:58.030 [2024-07-23 18:45:57.974858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:58.030 [2024-07-23 18:45:57.974875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.030 [2024-07-23 18:45:57.974886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:58.030 [2024-07-23 18:45:57.974892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:58.030 [2024-07-23 18:45:57.974898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.030 [2024-07-23 18:45:57.974904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:58.030 [2024-07-23 18:45:57.974910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:58.030 [2024-07-23 18:45:57.974917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:58.030 [2024-07-23 18:45:57.974929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:58.030 [2024-07-23 18:45:57.974935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974941] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:58.030 [2024-07-23 18:45:57.974951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:58.030 [2024-07-23 18:45:57.974960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.030 [2024-07-23 18:45:57.974967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.030 [2024-07-23 18:45:57.974980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:58.030 [2024-07-23 18:45:57.974987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:58.030 [2024-07-23 18:45:57.974993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:58.030 [2024-07-23 18:45:57.974999] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:58.030 [2024-07-23 18:45:57.975005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:58.030 [2024-07-23 18:45:57.975012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:58.030 [2024-07-23 18:45:57.975019] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:58.030 [2024-07-23 18:45:57.975034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:58.030 [2024-07-23 18:45:57.975049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:58.030 [2024-07-23 18:45:57.975057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:58.030 [2024-07-23 18:45:57.975064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:58.030 [2024-07-23 18:45:57.975071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:58.030 [2024-07-23 18:45:57.975081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:58.030 [2024-07-23 18:45:57.975088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:58.030 [2024-07-23 18:45:57.975094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:58.030 [2024-07-23 18:45:57.975100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:58.030 [2024-07-23 18:45:57.975106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:58.030 [2024-07-23 18:45:57.975139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:58.030 [2024-07-23 18:45:57.975153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.030 [2024-07-23 18:45:57.975162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:58.030 [2024-07-23 18:45:57.975169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:58.030 [2024-07-23 18:45:57.975176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:58.030 [2024-07-23 18:45:57.975182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:58.030 [2024-07-23 18:45:57.975200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:57.975210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:58.030 [2024-07-23 18:45:57.975217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:27:58.030 [2024-07-23 18:45:57.975223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.001105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.001227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.030 [2024-07-23 18:45:58.001267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:27:58.030 [2024-07-23 18:45:58.001305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.001622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.001669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.030 [2024-07-23 18:45:58.001701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:27:58.030 [2024-07-23 18:45:58.001728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.023652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.023701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.030 [2024-07-23 18:45:58.023721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.820 ms 00:27:58.030 [2024-07-23 18:45:58.023761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.023826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.023846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.030 [2024-07-23 18:45:58.023862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:58.030 [2024-07-23 18:45:58.023875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.024041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.024069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.030 [2024-07-23 18:45:58.024085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:58.030 [2024-07-23 18:45:58.024099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.024289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.024317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.030 [2024-07-23 18:45:58.024333] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:27:58.030 [2024-07-23 18:45:58.024347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.035407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.035446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.030 [2024-07-23 18:45:58.035460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.048 ms 00:27:58.030 [2024-07-23 18:45:58.035469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.035653] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:58.030 [2024-07-23 18:45:58.035672] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:58.030 [2024-07-23 18:45:58.035685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.030 [2024-07-23 18:45:58.035698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:58.030 [2024-07-23 18:45:58.035713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:58.030 [2024-07-23 18:45:58.035722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.030 [2024-07-23 18:45:58.046697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.046725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:58.031 [2024-07-23 18:45:58.046734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.972 ms 00:27:58.031 [2024-07-23 18:45:58.046742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.046850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.046859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:58.031 [2024-07-23 18:45:58.046868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:58.031 [2024-07-23 18:45:58.046874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.046915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.046928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:58.031 [2024-07-23 18:45:58.046939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:58.031 [2024-07-23 18:45:58.046947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.047179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.047197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:58.031 [2024-07-23 18:45:58.047205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:58.031 [2024-07-23 18:45:58.047211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.047227] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:58.031 [2024-07-23 18:45:58.047237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.047249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:27:58.031 [2024-07-23 18:45:58.047269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:58.031 [2024-07-23 18:45:58.047277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.055402] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:58.031 [2024-07-23 18:45:58.055529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.055542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.031 [2024-07-23 18:45:58.055550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.247 ms 00:27:58.031 [2024-07-23 18:45:58.055561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.057755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.057783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:58.031 [2024-07-23 18:45:58.057792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:27:58.031 [2024-07-23 18:45:58.057807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.057873] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:27:58.031 [2024-07-23 18:45:58.058408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.058427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.031 [2024-07-23 18:45:58.058439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:27:58.031 [2024-07-23 18:45:58.058446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.058484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.058494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.031 [2024-07-23 18:45:58.058501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.031 [2024-07-23 18:45:58.058517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.058553] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:58.031 [2024-07-23 18:45:58.058563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.058581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:58.031 [2024-07-23 18:45:58.058588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:58.031 [2024-07-23 18:45:58.058598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.063787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.063823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.031 [2024-07-23 18:45:58.063841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:27:58.031 [2024-07-23 18:45:58.063850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.063912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.031 [2024-07-23 18:45:58.063922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:27:58.031 [2024-07-23 18:45:58.063931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:58.031 [2024-07-23 18:45:58.063938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.031 [2024-07-23 18:45:58.070153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.809 ms, result 0 00:28:29.874  Copying: 31/1024 [MB] (31 MBps) Copying: 63/1024 [MB] (32 MBps) Copying: 95/1024 [MB] (31 MBps) Copying: 127/1024 [MB] (32 MBps) Copying: 159/1024 [MB] (32 MBps) Copying: 192/1024 [MB] (32 MBps) Copying: 225/1024 [MB] (33 MBps) Copying: 258/1024 [MB] (33 MBps) Copying: 292/1024 [MB] (33 MBps) Copying: 326/1024 [MB] (33 MBps) Copying: 359/1024 [MB] (33 MBps) Copying: 393/1024 [MB] (33 MBps) Copying: 425/1024 [MB] (32 MBps) Copying: 458/1024 [MB] (32 MBps) Copying: 491/1024 [MB] (33 MBps) Copying: 524/1024 [MB] (33 MBps) Copying: 557/1024 [MB] (32 MBps) Copying: 589/1024 [MB] (32 MBps) Copying: 622/1024 [MB] (33 MBps) Copying: 653/1024 [MB] (31 MBps) Copying: 684/1024 [MB] (30 MBps) Copying: 717/1024 [MB] (32 MBps) Copying: 750/1024 [MB] (32 MBps) Copying: 783/1024 [MB] (33 MBps) Copying: 817/1024 [MB] (34 MBps) Copying: 851/1024 [MB] (33 MBps) Copying: 883/1024 [MB] (32 MBps) Copying: 915/1024 [MB] (32 MBps) Copying: 946/1024 [MB] (30 MBps) Copying: 977/1024 [MB] (30 MBps) Copying: 1010/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-23 18:46:29.763530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.874 [2024-07-23 18:46:29.763626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:29.874 [2024-07-23 18:46:29.763645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:29.874 [2024-07-23 18:46:29.763653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.874 [2024-07-23 18:46:29.763677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:29.874 [2024-07-23 18:46:29.765409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.874 [2024-07-23 18:46:29.765430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:29.874 [2024-07-23 18:46:29.765440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:28:29.874 [2024-07-23 18:46:29.765454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.874 [2024-07-23 18:46:29.765697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.874 [2024-07-23 18:46:29.765709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:29.874 [2024-07-23 18:46:29.765718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:28:29.874 [2024-07-23 18:46:29.765739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.874 [2024-07-23 18:46:29.765775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.874 [2024-07-23 18:46:29.765785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:29.874 [2024-07-23 18:46:29.765794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:29.874 [2024-07-23 18:46:29.765803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.874 [2024-07-23 18:46:29.765866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:29.874 [2024-07-23 18:46:29.765879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:29.874 [2024-07-23 18:46:29.765887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:29.874 [2024-07-23 18:46:29.765895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.874 [2024-07-23 18:46:29.765909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:29.874 [2024-07-23 18:46:29.765923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:28:29.874 [2024-07-23 18:46:29.765936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.765997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766088] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:29.874 [2024-07-23 18:46:29.766226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 
18:46:29.766269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:28:29.875 [2024-07-23 18:46:29.766454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:29.875 [2024-07-23 18:46:29.766694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:29.875 [2024-07-23 18:46:29.766715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 13fadea6-e45b-447a-8eb6-e8ad4e453dcb 00:28:29.875 [2024-07-23 18:46:29.766723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:28:29.875 [2024-07-23 18:46:29.766731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4896 00:28:29.875 [2024-07-23 18:46:29.766740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4864 00:28:29.875 [2024-07-23 18:46:29.766752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0066 00:28:29.875 [2024-07-23 18:46:29.766759] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:29.875 [2024-07-23 18:46:29.766767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:29.875 [2024-07-23 18:46:29.766775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:29.875 [2024-07-23 18:46:29.766781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:29.875 [2024-07-23 18:46:29.766788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:29.875 [2024-07-23 18:46:29.766795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.875 [2024-07-23 18:46:29.766803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:29.875 [2024-07-23 18:46:29.766811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:28:29.875 [2024-07-23 18:46:29.766819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.875 [2024-07-23 18:46:29.769732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.875 [2024-07-23 18:46:29.769759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:29.875 [2024-07-23 18:46:29.769768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:28:29.875 [2024-07-23 18:46:29.769775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.875 [2024-07-23 18:46:29.769961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.875 [2024-07-23 18:46:29.769970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:29.875 [2024-07-23 18:46:29.769978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:28:29.875 [2024-07-23 18:46:29.769985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.875 [2024-07-23 18:46:29.780216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.875 [2024-07-23 18:46:29.780255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:29.875 [2024-07-23 18:46:29.780265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:29.875 [2024-07-23 18:46:29.780273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.875 [2024-07-23 18:46:29.780337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.875 [2024-07-23 18:46:29.780346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.875 [2024-07-23 18:46:29.780355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.875 [2024-07-23 18:46:29.780363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.875 [2024-07-23 18:46:29.780434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.780453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.876 [2024-07-23 18:46:29.780461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.780468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.780485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.780493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.876 [2024-07-23 18:46:29.780502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.780510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.804655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.804702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.876 [2024-07-23 18:46:29.804715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.804722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.876 [2024-07-23 18:46:29.819243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:29.876 [2024-07-23 18:46:29.819361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:29.876 [2024-07-23 18:46:29.819435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:29.876 
[2024-07-23 18:46:29.819538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:29.876 [2024-07-23 18:46:29.819621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:29.876 [2024-07-23 18:46:29.819696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.876 [2024-07-23 18:46:29.819778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:29.876 [2024-07-23 18:46:29.819786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.876 [2024-07-23 18:46:29.819793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.876 [2024-07-23 18:46:29.819923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.472 ms, result 0 00:28:30.136 00:28:30.136 00:28:30.396 18:46:30 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:32.299 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:32.299 18:46:31 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:32.299 18:46:31 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:32.299 18:46:31 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:32.299 Process with pid 94927 is not found 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94927 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94927 ']' 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94927 00:28:32.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (94927) - No such process 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 94927 is not found' 00:28:32.299 Remove shared memory files 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_band_md 
/dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_l2p_l1 /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_l2p_l2 /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_l2p_l2_ctx /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_nvc_md /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_p2l_pool /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_sb /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_sb_shm /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_trim_bitmap /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_trim_log /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_trim_md /dev/hugepages/ftl_13fadea6-e45b-447a-8eb6-e8ad4e453dcb_vmap 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:32.299 ************************************ 00:28:32.299 END TEST ftl_restore_fast 00:28:32.299 ************************************ 00:28:32.299 00:28:32.299 real 2m39.102s 00:28:32.299 user 2m27.436s 00:28:32.299 sys 0m12.870s 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:32.299 18:46:32 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:32.299 18:46:32 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:32.299 18:46:32 ftl -- ftl/ftl.sh@14 -- # killprocess 88179 00:28:32.299 18:46:32 ftl -- common/autotest_common.sh@946 -- # '[' -z 88179 ']' 00:28:32.299 18:46:32 ftl -- common/autotest_common.sh@950 -- # kill -0 88179 00:28:32.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (88179) - No such process 00:28:32.299 Process with pid 88179 is not found 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 88179 is not found' 00:28:32.300 18:46:32 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:32.300 18:46:32 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96638 00:28:32.300 18:46:32 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.300 18:46:32 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96638 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@827 -- # '[' -z 96638 ']' 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:32.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:32.300 18:46:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:32.300 [2024-07-23 18:46:32.218235] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:28:32.300 [2024-07-23 18:46:32.218381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96638 ] 00:28:32.559 [2024-07-23 18:46:32.364186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.559 [2024-07-23 18:46:32.431429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.127 18:46:32 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:33.127 18:46:32 ftl -- common/autotest_common.sh@860 -- # return 0 00:28:33.127 18:46:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:33.387 nvme0n1 00:28:33.387 18:46:33 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:33.387 18:46:33 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:33.387 18:46:33 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:33.387 18:46:33 ftl -- ftl/common.sh@28 -- # stores=2285af2d-f535-4d00-8e74-466dec12a3fb 00:28:33.387 18:46:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:33.387 18:46:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2285af2d-f535-4d00-8e74-466dec12a3fb 00:28:33.645 18:46:33 ftl -- ftl/ftl.sh@23 -- # killprocess 96638 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@946 -- # '[' -z 96638 ']' 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@950 -- # kill -0 96638 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@951 -- # uname 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96638 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:33.645 killing process with pid 96638 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96638' 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@965 -- # kill 96638 00:28:33.645 18:46:33 ftl -- common/autotest_common.sh@970 -- # wait 96638 00:28:34.581 18:46:34 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:34.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:34.840 Waiting for block devices as requested 00:28:34.840 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:34.840 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.098 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.098 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:40.370 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:40.370 18:46:40 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:40.370 Remove shared memory files 00:28:40.370 18:46:40 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:40.370 18:46:40 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:40.370 18:46:40 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:40.370 18:46:40 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:40.370 18:46:40 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:40.370 18:46:40 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:40.370 00:28:40.370 real 12m6.296s 00:28:40.370 user 
14m15.621s 00:28:40.370 sys 1m29.105s 00:28:40.370 18:46:40 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:40.370 18:46:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:40.370 ************************************ 00:28:40.370 END TEST ftl 00:28:40.370 ************************************ 00:28:40.370 18:46:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:40.370 18:46:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:40.370 18:46:40 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:40.370 18:46:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:40.370 18:46:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:40.371 18:46:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:40.371 18:46:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:40.371 18:46:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:40.371 18:46:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:40.371 18:46:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:40.371 18:46:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:40.371 18:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:40.371 18:46:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:40.371 18:46:40 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:40.371 18:46:40 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:40.371 18:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:42.276 INFO: APP EXITING 00:28:42.276 INFO: killing all VMs 00:28:42.276 INFO: killing vhost app 00:28:42.276 INFO: EXIT DONE 00:28:42.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.102 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:43.102 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:43.102 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:43.102 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:43.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.929 Cleaning 00:28:43.929 Removing: /var/run/dpdk/spdk0/config 00:28:43.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:43.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:43.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:43.929 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:43.929 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:43.929 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:43.929 Removing: /var/run/dpdk/spdk0 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74077 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74233 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74431 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74514 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74539 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74652 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74669 00:28:43.929 Removing: /var/run/dpdk/spdk_pid74827 00:28:44.188 Removing: /var/run/dpdk/spdk_pid74893 00:28:44.188 Removing: /var/run/dpdk/spdk_pid74964 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75051 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75123 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75163 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75196 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75265 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75360 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75778 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75826 00:28:44.188 Removing: 
/var/run/dpdk/spdk_pid75878 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75894 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75963 00:28:44.188 Removing: /var/run/dpdk/spdk_pid75979 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76048 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76064 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76117 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76129 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76177 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76195 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76314 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76356 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76426 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76485 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76511 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76578 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76613 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76649 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76684 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76724 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76761 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76796 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76832 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76873 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76903 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76944 00:28:44.188 Removing: /var/run/dpdk/spdk_pid76979 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77015 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77056 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77086 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77127 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77163 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77201 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77245 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77281 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77317 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77383 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77477 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77622 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77695 00:28:44.188 Removing: /var/run/dpdk/spdk_pid77726 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78148 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78235 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78353 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78396 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78416 00:28:44.188 Removing: /var/run/dpdk/spdk_pid78492 00:28:44.188 Removing: /var/run/dpdk/spdk_pid79107 00:28:44.188 Removing: /var/run/dpdk/spdk_pid79142 00:28:44.188 Removing: /var/run/dpdk/spdk_pid79601 00:28:44.188 Removing: /var/run/dpdk/spdk_pid79694 00:28:44.188 Removing: /var/run/dpdk/spdk_pid79803 00:28:44.447 Removing: /var/run/dpdk/spdk_pid79845 00:28:44.447 Removing: /var/run/dpdk/spdk_pid79876 00:28:44.447 Removing: /var/run/dpdk/spdk_pid79896 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81717 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81842 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81852 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81868 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81936 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81940 00:28:44.447 Removing: /var/run/dpdk/spdk_pid81954 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82025 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82035 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82052 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82141 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82145 00:28:44.447 Removing: /var/run/dpdk/spdk_pid82157 00:28:44.447 Removing: /var/run/dpdk/spdk_pid83565 00:28:44.447 Removing: /var/run/dpdk/spdk_pid83644 
00:28:44.447 Removing: /var/run/dpdk/spdk_pid84530 00:28:44.447 Removing: /var/run/dpdk/spdk_pid84883 00:28:44.447 Removing: /var/run/dpdk/spdk_pid84950 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85015 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85081 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85168 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85236 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85366 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85637 00:28:44.448 Removing: /var/run/dpdk/spdk_pid85661 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86084 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86260 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86357 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86457 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86499 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86519 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86823 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86861 00:28:44.448 Removing: /var/run/dpdk/spdk_pid86916 00:28:44.448 Removing: /var/run/dpdk/spdk_pid87262 00:28:44.448 Removing: /var/run/dpdk/spdk_pid87399 00:28:44.448 Removing: /var/run/dpdk/spdk_pid88179 00:28:44.448 Removing: /var/run/dpdk/spdk_pid88292 00:28:44.448 Removing: /var/run/dpdk/spdk_pid88511 00:28:44.448 Removing: /var/run/dpdk/spdk_pid88599 00:28:44.448 Removing: /var/run/dpdk/spdk_pid88946 00:28:44.448 Removing: /var/run/dpdk/spdk_pid89204 00:28:44.448 Removing: /var/run/dpdk/spdk_pid89563 00:28:44.448 Removing: /var/run/dpdk/spdk_pid89767 00:28:44.448 Removing: /var/run/dpdk/spdk_pid89883 00:28:44.448 Removing: /var/run/dpdk/spdk_pid89919 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90029 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90043 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90079 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90248 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90489 00:28:44.448 Removing: /var/run/dpdk/spdk_pid90846 00:28:44.448 Removing: /var/run/dpdk/spdk_pid91216 00:28:44.448 Removing: /var/run/dpdk/spdk_pid91617 00:28:44.448 Removing: /var/run/dpdk/spdk_pid92064 00:28:44.448 Removing: /var/run/dpdk/spdk_pid92200 00:28:44.448 Removing: /var/run/dpdk/spdk_pid92276 00:28:44.448 Removing: /var/run/dpdk/spdk_pid92788 00:28:44.707 Removing: /var/run/dpdk/spdk_pid92841 00:28:44.707 Removing: /var/run/dpdk/spdk_pid93253 00:28:44.707 Removing: /var/run/dpdk/spdk_pid93591 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94004 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94126 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94151 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94204 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94254 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94301 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94512 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94586 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94637 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94693 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94728 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94779 00:28:44.707 Removing: /var/run/dpdk/spdk_pid94927 00:28:44.707 Removing: /var/run/dpdk/spdk_pid95174 00:28:44.707 Removing: /var/run/dpdk/spdk_pid95539 00:28:44.707 Removing: /var/run/dpdk/spdk_pid95889 00:28:44.707 Removing: /var/run/dpdk/spdk_pid96264 00:28:44.707 Removing: /var/run/dpdk/spdk_pid96638 00:28:44.707 Clean 00:28:44.707 18:46:44 -- common/autotest_common.sh@1447 -- # return 0 00:28:44.708 18:46:44 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:44.708 18:46:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.708 18:46:44 -- 
common/autotest_common.sh@10 -- # set +x 00:28:44.708 18:46:44 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:44.708 18:46:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:44.708 18:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:44.967 18:46:44 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:44.967 18:46:44 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:44.967 18:46:44 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:44.967 18:46:44 -- spdk/autotest.sh@391 -- # hash lcov 00:28:44.967 18:46:44 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:44.967 18:46:44 -- spdk/autotest.sh@393 -- # hostname 00:28:44.967 18:46:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:44.967 geninfo: WARNING: invalid characters removed from testname! 00:29:11.516 18:47:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:12.451 18:47:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:14.983 18:47:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:16.883 18:47:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.783 18:47:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:21.315 18:47:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:23.214 18:47:23 -- 
spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:23.214 18:47:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.214 18:47:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:23.214 18:47:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.214 18:47:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.214 18:47:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.214 18:47:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.214 18:47:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.214 18:47:23 -- paths/export.sh@5 -- $ export PATH 00:29:23.214 18:47:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.214 18:47:23 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:23.215 18:47:23 -- common/autobuild_common.sh@440 -- $ date +%s 00:29:23.215 18:47:23 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721760443.XXXXXX 00:29:23.215 18:47:23 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721760443.N92gZe 00:29:23.215 18:47:23 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:29:23.215 18:47:23 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:29:23.215 18:47:23 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:23.215 18:47:23 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:23.215 18:47:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:23.215 18:47:23 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:23.215 18:47:23 -- common/autobuild_common.sh@456 -- $ get_config_params 00:29:23.215 18:47:23 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:23.215 18:47:23 -- 
common/autotest_common.sh@10 -- $ set +x 00:29:23.215 18:47:23 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:29:23.215 18:47:23 -- common/autobuild_common.sh@458 -- $ start_monitor_resources 00:29:23.215 18:47:23 -- pm/common@17 -- $ local monitor 00:29:23.215 18:47:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.215 18:47:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.215 18:47:23 -- pm/common@25 -- $ sleep 1 00:29:23.215 18:47:23 -- pm/common@21 -- $ date +%s 00:29:23.215 18:47:23 -- pm/common@21 -- $ date +%s 00:29:23.215 18:47:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721760443 00:29:23.215 18:47:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721760443 00:29:23.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721760443_collect-vmstat.pm.log 00:29:23.215 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721760443_collect-cpu-load.pm.log 00:29:24.146 18:47:24 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT 00:29:24.146 18:47:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:24.146 18:47:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:24.146 18:47:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:24.146 18:47:24 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:24.146 18:47:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:24.146 18:47:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:24.146 18:47:24 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:24.146 18:47:24 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:24.147 18:47:24 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:24.405 18:47:24 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:24.405 18:47:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:24.405 18:47:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:24.405 18:47:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:24.405 18:47:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:24.405 18:47:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:24.405 18:47:24 -- pm/common@44 -- $ pid=98350 00:29:24.405 18:47:24 -- pm/common@50 -- $ kill -TERM 98350 00:29:24.405 18:47:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:24.405 18:47:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:24.405 18:47:24 -- pm/common@44 -- $ pid=98352 00:29:24.405 18:47:24 -- pm/common@50 -- $ kill -TERM 98352 00:29:24.405 + [[ -n 6110 ]] 00:29:24.405 + sudo kill 6110 00:29:24.413 [Pipeline] } 00:29:24.432 [Pipeline] // timeout 00:29:24.437 [Pipeline] } 00:29:24.454 [Pipeline] // stage 00:29:24.460 [Pipeline] } 00:29:24.476 [Pipeline] // 
catchError 00:29:24.486 [Pipeline] stage 00:29:24.488 [Pipeline] { (Stop VM) 00:29:24.502 [Pipeline] sh 00:29:24.834 + vagrant halt 00:29:27.413 ==> default: Halting domain... 00:29:35.537 [Pipeline] sh 00:29:35.815 + vagrant destroy -f 00:29:38.346 ==> default: Removing domain... 00:29:39.294 [Pipeline] sh 00:29:39.576 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:39.584 [Pipeline] } 00:29:39.601 [Pipeline] // stage 00:29:39.607 [Pipeline] } 00:29:39.623 [Pipeline] // dir 00:29:39.629 [Pipeline] } 00:29:39.645 [Pipeline] // wrap 00:29:39.651 [Pipeline] } 00:29:39.665 [Pipeline] // catchError 00:29:39.674 [Pipeline] stage 00:29:39.676 [Pipeline] { (Epilogue) 00:29:39.690 [Pipeline] sh 00:29:39.970 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:45.267 [Pipeline] catchError 00:29:45.268 [Pipeline] { 00:29:45.283 [Pipeline] sh 00:29:45.602 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:45.602 Artifacts sizes are good 00:29:45.611 [Pipeline] } 00:29:45.628 [Pipeline] // catchError 00:29:45.640 [Pipeline] archiveArtifacts 00:29:45.648 Archiving artifacts 00:29:45.775 [Pipeline] cleanWs 00:29:45.787 [WS-CLEANUP] Deleting project workspace... 00:29:45.787 [WS-CLEANUP] Deferred wipeout is used... 00:29:45.794 [WS-CLEANUP] done 00:29:45.796 [Pipeline] } 00:29:45.815 [Pipeline] // stage 00:29:45.821 [Pipeline] } 00:29:45.838 [Pipeline] // node 00:29:45.844 [Pipeline] End of Pipeline 00:29:45.885 Finished: SUCCESS